Introduction
This is a collection of personal notes documenting server infrastructure setup and configuration by Ilham Aulia Majid.
About This Documentation
These notes serve as a reference for setting up and managing VPS infrastructure. They cover everything from initial server security to containerization and private networking, with a focus on reproducible, step-by-step instructions.
The site is built with mdBook, a command-line tool that creates a searchable, navigable book from Markdown files.
How to Use
The documentation is organized in a logical progression - start with VPS Basics for a secure foundation, then proceed through the chapters based on your needs. Each guide includes prerequisites, step-by-step instructions, and common commands for reference.
VPS Setup
Overview
This guide covers the initial security setup for a fresh VPS. A new VPS typically comes with password authentication enabled and no protection against attacks. We’ll secure it by:
- Setting up SSH key authentication (more secure than passwords)
- Configuring automatic security updates (keeps the system patched)
- Installing Fail2Ban (blocks brute-force attacks)
After completing this guide, your VPS will have a solid security foundation for hosting services.
Prerequisites
- A VPS running Ubuntu Server (commands should be similar on other Debian-based distributions)
- Root or sudo access
- SSH client on your local machine (Terminal on macOS/Linux, or Windows Terminal)
SSH Setup
SSH keys are more secure than passwords because they can’t be guessed or brute-forced. The key pair consists of a private key (stays on your machine) and a public key (goes on the server).
Generate a Key Pair
On your local machine, generate an ed25519 key:
ssh-keygen -t ed25519
Press Enter to accept the default location. Optionally set a passphrase for extra security.
This creates two files:
- ~/.ssh/id_ed25519 - your private key (never share this)
- ~/.ssh/id_ed25519.pub - your public key (safe to share)
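If you manage several servers, a dedicated key per server keeps things tidy. A sketch, assuming an illustrative filename and comment label (adjust both to taste):

```shell
# Sketch: a dedicated, labeled key for this VPS. The filename and comment
# are illustrative. -N "" creates the key without a passphrase so the
# command runs non-interactively; prefer setting a passphrase in practice.
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -C "laptop-to-vps" -f "$HOME/.ssh/id_ed25519_vps" -N ""
```

If you use a non-default path like this, point the IdentityFile directive in your SSH config at it.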
Copy the Public Key to VPS
ssh-copy-id <username>@<vps-ip>
This appends your public key to the server’s ~/.ssh/authorized_keys file. You’ll need to enter your password one last time.
Configure SSH Client
Add this to ~/.ssh/config on your local machine to simplify connections:
Host *
    AddKeysToAgent yes
    IdentitiesOnly yes
    ServerAliveInterval 60
    IdentityFile ~/.ssh/id_ed25519
    # UseKeychain yes # macOS only

Host github.com
    HostName github.com
    User git

Host vps
    HostName <vps-ip>
    User <username>
    Port 22
| Setting | Purpose |
|---|---|
| AddKeysToAgent yes | Automatically add keys to the SSH agent |
| IdentitiesOnly yes | Only use explicitly configured keys |
| ServerAliveInterval 60 | Send a keepalive every 60 seconds to prevent disconnection |
| IdentityFile | Path to your private key |
Now you can connect with just:
ssh vps
No password needed. Once key login works, disable password authentication and root login on the server to complete the hardening.
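Disabling password and root login is done in the server's /etc/ssh/sshd_config. A minimal sketch (keep a second SSH session open while testing so you can't lock yourself out):

```
# /etc/ssh/sshd_config - enforce key-only access
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no
```

Apply with `sudo systemctl restart ssh` (the service is named `sshd` on some distributions), then confirm a fresh key-based login still works before closing your existing session.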
Timezone
Set the server timezone so logs show the correct local time:
sudo timedatectl set-timezone Asia/Jakarta
Verify with:
timedatectl
Replace Asia/Jakarta with your timezone. List available timezones with timedatectl list-timezones.
Auto Security Updates
Security vulnerabilities are discovered regularly. Unattended-upgrades automatically installs security patches so you don’t have to manually update.
Install and configure:
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
Select “Yes” when prompted to enable automatic updates.
The system will now:
- Check for security updates daily
- Install them automatically
- Keep your system patched without intervention
Configuration (Optional)
Config file location: /etc/apt/apt.conf.d/50unattended-upgrades
To enable automatic reboots when required (e.g., kernel updates), add:
Unattended-Upgrade::Automatic-Reboot "true";
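If you enable automatic reboots, you can additionally pin them to a low-traffic hour. A sketch; both options are documented in the comments of 50unattended-upgrades itself:

```
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";
```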
View update logs at: /var/log/unattended-upgrades/unattended-upgrades.log
Fail2Ban
Fail2Ban monitors log files for failed login attempts. When it detects repeated failures from an IP address, it bans that IP by adding a firewall rule.
This protects against brute-force SSH attacks where attackers try thousands of password combinations.
Install and Enable
sudo apt install -y fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
Check Status
View all active jails:
sudo fail2ban-client status
View SSH jail specifically (shows banned IPs):
sudo fail2ban-client status sshd
Unban an IP
If you accidentally get banned (e.g., too many failed logins):
sudo fail2ban-client set sshd unbanip <ip>
Custom Settings (Optional)
The default settings work well for most cases. To customize, create a local config:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo vim /etc/fail2ban/jail.local
| Setting | Default | Description |
|---|---|---|
| maxretry | 5 | Number of failures before a ban |
| bantime | 10m | How long the ban lasts |
| findtime | 10m | Time window for counting failures |
Example: With defaults, 5 failed logins within 10 minutes triggers a 10-minute ban.
SSH jail (sshd) is enabled by default - no extra configuration needed.
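For instance, a jail.local that overrides the sshd jail with stricter values than the defaults (the numbers here are illustrative, not recommendations):

```ini
[sshd]
enabled = true
maxretry = 3
bantime = 1h
findtime = 10m
```

Reload Fail2Ban afterwards with `sudo systemctl restart fail2ban` and confirm the jail picked up the settings via `sudo fail2ban-client status sshd`.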
UFW Setup
Overview
UFW (Uncomplicated Firewall) is a frontend for iptables that controls which ports are reachable from the internet. Without a firewall, every port with a listening service is exposed. UFW lets you block everything except the services you explicitly allow.
Think of it as a gatekeeper: all incoming traffic is denied unless you create a rule to allow it.
How It Works
Internet Traffic
│
▼
┌─────────────────┐
│ UFW Firewall │
│ │
│ Port 22 ✓ ───────► SSH
│ Port 80 ✓ ───────► nginx (HTTP)
│ Port 443 ✓ ───────► nginx (HTTPS)
│ Port 3000 ✗ (blocked)
│ Port 5432 ✗ (blocked)
│ │
└─────────────────┘
Only ports with explicit “allow” rules pass through. Everything else is blocked.
Prerequisites
- VPS setup completed (see VPS Setup)
Installation
sudo apt install -y ufw
Basic Setup
Before enabling UFW, you must allow SSH. Otherwise you will lock yourself out of the server.
sudo ufw allow OpenSSH
This creates a rule allowing incoming connections on port 22 (SSH).
Now enable the firewall:
sudo ufw enable
UFW is now active. All incoming traffic is blocked except SSH.
Verify with:
sudo ufw status
Reading Status Output
To see all rules with numbers (useful for deleting rules later):
sudo ufw status numbered
Understanding the Output
Columns:
- To: Where the traffic is going (destination)
- Action: What UFW does (ALLOW IN/OUT/FWD)
- From: Where the traffic comes from (source)
Actions:
- ALLOW IN: Incoming connections to your server (e.g., SSH, web traffic)
- ALLOW OUT: Outgoing connections from your server (e.g., downloading updates)
- ALLOW FWD: Traffic routing/forwarding through your server (e.g., VPN traffic)
Example Breakdown
[ 1] OpenSSH ALLOW IN Anywhere
→ Allow SSH connections from anywhere to your server (port 22)
[ 2] Nginx Full ALLOW IN Anywhere
→ Allow HTTP/HTTPS connections from anywhere to your server (ports 80 and 443)
[ 3] Anywhere on tailscale0 ALLOW IN Anywhere
→ Allow any incoming traffic on the Tailscale interface (VPN traffic)
[ 4] Anywhere ALLOW FWD Anywhere on tailscale0
→ Allow forwarding traffic FROM Tailscale interface to anywhere (VPN routing)
[ 5] Anywhere ALLOW OUT Anywhere on tailscale0 (out)
→ Allow outgoing traffic on the Tailscale interface
[ 6] Anywhere on eth0 ALLOW FWD Anywhere on tailscale0
→ Allow forwarding FROM Tailscale TO eth0 (VPN to internet)
[ 7] Anywhere on tailscale0 ALLOW FWD Anywhere on eth0
→ Allow forwarding FROM eth0 TO Tailscale (internet to VPN)
IPv6 Rules:
Rules with (v6) are the same rules but for IPv6 traffic. If you see a rule numbered [1] and [8], they’re the same rule for different IP versions.
Adding Rules
There are several ways to allow traffic through the firewall.
By service name (UFW knows common services):
sudo ufw allow OpenSSH # Port 22
sudo ufw allow 'Nginx Full' # Ports 80 and 443
By port number (when you need a specific port):
sudo ufw allow 80/tcp # HTTP
sudo ufw allow 443/tcp # HTTPS
The /tcp suffix specifies the protocol. Use /udp for UDP traffic.
By port range (for services using multiple ports):
sudo ufw allow 6000:6007/tcp
This allows ports 6000 through 6007.
By interface (for VPN routing, like Tailscale):
sudo ufw allow in on tailscale0
sudo ufw route allow in on tailscale0
This allows traffic on the tailscale0 network interface and permits routing through it.
Removing Rules
First, list rules with numbers:
sudo ufw status numbered
Then delete by number:
sudo ufw delete 3
This removes rule number 3. Rule numbers shift after deletion, so always re-check with status numbered before deleting another.
Alternatively, delete by specification (exactly as you added it):
sudo ufw delete allow 80/tcp
Common Rules Reference
| Service | Command | What it allows |
|---|---|---|
| SSH | sudo ufw allow OpenSSH | Remote terminal access (port 22) |
| HTTP | sudo ufw allow 80/tcp | Web traffic, unencrypted |
| HTTPS | sudo ufw allow 443/tcp | Web traffic, encrypted |
| HTTP + HTTPS | sudo ufw allow 'Nginx Full' | Both web ports at once |
| Ping | edit /etc/ufw/before.rules | ICMP echo requests (allowed by default) |
Notes
- UFW blocks all incoming traffic by default (deny policy)
- Ping (ICMP echo) is allowed by default through rules in /etc/ufw/before.rules
- Rules persist across reboots
- Always allow SSH before enabling UFW, or you will lose access
- When in doubt, check sudo ufw status before and after changes
nginx Setup
Overview
nginx (pronounced “engine-x”) is a web server that can serve static files and act as a reverse proxy for backend applications. In a typical setup:
Internet
│
▼
┌─────────────────────────────┐
│ nginx │
│ │
│ :80 (HTTP) ──► redirect │
│ :443 (HTTPS) ──┬──► static files (/var/www/)
│ └──► proxy to localhost:3000
└─────────────────────────────┘
nginx handles:
- SSL/TLS termination (HTTPS)
- Serving static files efficiently
- Proxying requests to backend applications
- Load balancing (if needed)
Prerequisites
- VPS setup completed (see VPS Setup)
- UFW configured (see UFW Setup)
- Domain name pointed to your VPS IP (optional, but required for SSL)
Installation
Update package list and install nginx:
sudo apt update
sudo apt install -y nginx
Check the installed version:
nginx -v
Enable nginx to start on boot and start it now:
sudo systemctl enable nginx
sudo systemctl start nginx
Verify it’s running:
sudo systemctl status nginx
Firewall Rules
nginx needs ports 80 (HTTP) and 443 (HTTPS) open. Allow both with a single command (see UFW Setup for details):
sudo ufw allow 'Nginx Full'
Test Installation
Open your browser and visit:
http://<vps-ip>
You should see the default nginx welcome page. This confirms nginx is installed and the firewall is configured correctly.
Important Directories
nginx organizes configuration files in a specific structure:
| Path | Description |
|---|---|
| /etc/nginx/nginx.conf | Main configuration file (rarely edited directly) |
| /etc/nginx/sites-available/ | Store all site configurations here |
| /etc/nginx/sites-enabled/ | Symlinks to enabled sites (nginx only reads this) |
| /var/www/ | Default location for website files |
| /var/log/nginx/ | Access and error logs |
The sites-available/sites-enabled pattern lets you easily enable or disable sites without deleting configurations.
Server Block (Virtual Host)
A server block defines how nginx handles requests for a specific domain. Each domain gets its own configuration file.
Create a Configuration File
sudo vim /etc/nginx/sites-available/<domain>
Static Site Configuration
For serving HTML, CSS, and JavaScript files:
server {
    listen 80;
    listen [::]:80;
    server_name <domain>;

    root /var/www/<domain>;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    access_log /var/log/nginx/<domain>.access.log;
    error_log /var/log/nginx/<domain>.error.log;
}
| Directive | Purpose |
|---|---|
| listen 80 | Accept HTTP connections on port 80 |
| listen [::]:80 | Same for IPv6 |
| server_name | The domain this block responds to |
| root | Directory containing website files |
| index | Default file to serve for directory requests |
| try_files | Try the URI as a file, then as a directory, then return 404 |
Create Web Root and Test Page
Create the directory for your website files:
sudo mkdir -p /var/www/<domain>
sudo chown -R $USER:$USER /var/www/<domain>
Create a simple test page:
echo "<h1>Welcome to <domain></h1>" > /var/www/<domain>/index.html
Enable the Site
Create a symlink from sites-available to sites-enabled:
sudo ln -s /etc/nginx/sites-available/<domain> /etc/nginx/sites-enabled/
Test the configuration for syntax errors:
sudo nginx -t
If the test passes, reload nginx to apply changes:
sudo systemctl reload nginx
Visit http://<domain> to see your test page.
Reverse Proxy
A reverse proxy forwards requests to a backend application (e.g., Node.js, Python, Go) running on localhost.
Client Request
│
▼
┌─────────────┐ ┌─────────────────┐
│ nginx │ ──► │ Your App │
│ :443 │ │ localhost:3000 │
└─────────────┘ └─────────────────┘
Reverse Proxy Configuration
server {
    listen 80;
    server_name <domain>;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
| Directive | Purpose |
|---|---|
| proxy_pass | URL of the backend application |
| proxy_http_version 1.1 | Use HTTP/1.1 for upstream connections |
| Host $host | Pass the original Host header to the backend |
| X-Real-IP | Pass the client's real IP address |
| X-Forwarded-For | Chain of proxy IPs |
| X-Forwarded-Proto | Original protocol (http or https) |
WebSocket Support
If your backend uses WebSockets (real-time connections), add these headers:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
This tells nginx to upgrade the connection from HTTP to WebSocket when requested.
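Put together with the proxy directives above, a WebSocket-capable location might look like this sketch (the backend address localhost:3000 is the same assumption used earlier in this guide):

```nginx
location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    # Upgrade HTTP connections to WebSocket when the client requests it
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_cache_bypass $http_upgrade;
}
```

As always, run `sudo nginx -t` before reloading.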
Serving Multiple Applications
You can serve multiple applications under one domain using different location blocks. Each location can serve static files or proxy to different backend applications.
domain.com/ → /var/www/<domain> (static files)
domain.com/app → localhost:3000 (backend app)
Example Configuration
server {
    listen 443 ssl;
    server_name <domain>;

    # Main site serves static files
    location / {
        root /var/www/<domain>;
        try_files $uri $uri/ =404;
    }

    # App at /app subpath
    location /app {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Trailing Slash in proxy_pass
The trailing slash in proxy_pass changes how nginx forwards requests:
Without trailing slash - preserves the full path:
location /app {
    proxy_pass http://localhost:3000;
}
# Request: domain.com/app/page → Backend receives: /app/page
With trailing slash - strips the location prefix:
location /app {
    proxy_pass http://localhost:3000/;
}
# Request: domain.com/app/page → Backend receives: /page
Use the trailing slash when your backend application serves from root. Without the trailing slash, your application must handle requests with the /app prefix included.
Application Configuration
Your application must be configured to handle its base path. If using the trailing slash in proxy_pass, your app serves from root as normal. Without the trailing slash, configure your app to serve from /app. If routes return 404 or assets fail to load, check this configuration.
Test by visiting https://<domain>/app in your browser.
Common Commands
| Command | Description |
|---|---|
| sudo nginx -t | Test configuration syntax |
| sudo systemctl reload nginx | Apply config changes (no downtime) |
| sudo systemctl restart nginx | Full restart |
| sudo systemctl stop nginx | Stop nginx |
| sudo tail -f /var/log/nginx/access.log | Watch access logs in real time |
| sudo tail -f /var/log/nginx/error.log | Watch error logs in real time |
Always run nginx -t before reloading to catch syntax errors.
SSL Setup
Overview
HTTPS encrypts traffic between clients and your server, protecting sensitive data from interception. Let’s Encrypt provides free SSL/TLS certificates that are trusted by all major browsers.
Certbot is a tool that automates the entire process: obtaining certificates, configuring nginx, and setting up automatic renewal.
Prerequisites
- VPS setup completed (see VPS Setup)
- nginx installed and configured (see nginx Setup)
- Domain name pointed to your VPS IP address
- nginx server block configured for your domain
Installation
Install Certbot and the nginx plugin:
sudo apt install -y certbot python3-certbot-nginx
The nginx plugin allows Certbot to automatically modify your nginx configuration to enable HTTPS.
Getting a Certificate
Single Domain
For a single domain:
sudo certbot --nginx -d <domain>
Certbot will:
- Verify you control the domain (via HTTP challenge)
- Obtain a certificate from Let’s Encrypt
- Automatically configure nginx for HTTPS
- Set up HTTP to HTTPS redirect
Multiple Domains
For multiple domains or subdomains in one certificate:
sudo certbot --nginx -d example.com -d www.example.com -d api.example.com
This creates a single certificate covering all specified domains.
Adding Subdomains Later
If you add a subdomain after initial setup:
sudo certbot --nginx -d new-subdomain.example.com
This creates a separate certificate for the new subdomain.
Certificate Renewal
Certificates expire after 90 days. Certbot installs a systemd timer that automatically renews certificates when they have 30 days or less remaining.
Test Renewal Process
Verify automatic renewal works:
sudo certbot renew --dry-run
This simulates renewal without actually renewing certificates. If successful, automatic renewal is configured correctly.
Manual Renewal
Renew any certificates that are due (add --force-renewal to renew early):
sudo certbot renew
Check Certificate Status
List all certificates with expiration dates:
sudo certbot certificates
Troubleshooting
Port 80 Must Be Open
Certbot uses HTTP (port 80) to verify domain ownership. Ensure UFW allows port 80:
sudo ufw allow 'Nginx Full'
Domain Must Point to VPS
The domain must resolve to your VPS IP address before running Certbot. Verify with:
dig +short <domain>
Certificate Renewal Failures
Check renewal logs if automatic renewal fails:
sudo journalctl -u certbot.timer
sudo tail -f /var/log/letsencrypt/letsencrypt.log
Common Commands
| Command | Description |
|---|---|
| sudo certbot --nginx -d <domain> | Obtain and install a certificate |
| sudo certbot certificates | List all certificates |
| sudo certbot renew | Renew certificates that are due |
| sudo certbot renew --dry-run | Test the renewal process |
| sudo certbot delete --cert-name <domain> | Delete a certificate |
SSH Reverse Tunnel
Overview
An SSH reverse tunnel exposes a local service to the internet through a VPS. It works by establishing an outbound SSH connection from your local machine to the VPS, which then forwards incoming traffic back through that connection to your local service.
This is useful when you are behind NAT, a firewall, or lack a public IP address.
Architecture
Internet Request
│
▼
┌─────────────────┐
│ VPS (Public) │
│ nginx :443 │
│ │ │
│ ▼ │
│ localhost:5201 │◄── SSH tunnel listens here
└────────┬────────┘
│
SSH Connection
(outbound from local)
│
▼
┌─────────────────┐
│ Local Machine │
│ localhost:8080 │◄── Your service
└─────────────────┘
Traffic flow:
- Client requests https://<domain>
- nginx terminates TLS and proxies to 127.0.0.1:5201
- Port 5201 is the remote end of the SSH tunnel
- Traffic flows through the tunnel to your local machine on port 8080
Prerequisites
- VPS setup completed (see VPS Setup)
- nginx installed and configured (see nginx Setup)
- A local service running (this guide uses localhost:8080)
Setup
Configure a subdomain with nginx reverse proxy to 127.0.0.1:5201 and enable SSL (see nginx Setup).
SSH Reverse Tunnel Command
From your local machine:
ssh -N -R 5201:localhost:8080 <username>@<vps-ip>
| Flag | Purpose |
|---|---|
| -N | Do not execute a remote command; port forwarding only |
| -R 5201:localhost:8080 | Bind remote port 5201 to local port 8080 |
Format: -R [remote_port]:[local_host]:[local_port]
The tunnel remains open while the SSH connection is active.
Observability
Stream VPS nginx logs to your local machine:
ssh <username>@<vps-ip> "tail -f /var/log/nginx/access.log" | grep --line-buffered <domain>
Example
# Start local service
npm run dev # localhost:3000
# Establish tunnel
ssh -N -R 5201:localhost:3000 <username>@<vps-ip>
Access from anywhere: https://<domain>
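The tunnel dies whenever the SSH connection drops. One way to keep it alive is a systemd user unit on the local machine that restarts it automatically; a sketch, with an illustrative unit name and the same placeholders as above:

```ini
# ~/.config/systemd/user/reverse-tunnel.service (unit name is illustrative)
[Unit]
Description=Persistent SSH reverse tunnel to VPS
After=network-online.target

[Service]
# ExitOnForwardFailure makes ssh exit (and systemd restart it) if the
# remote port 5201 cannot be bound; ServerAliveInterval detects dead links
ExecStart=/usr/bin/ssh -N -R 5201:localhost:8080 \
    -o ServerAliveInterval=60 -o ExitOnForwardFailure=yes <username>@<vps-ip>
Restart=always
RestartSec=10

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now reverse-tunnel`. This assumes key-based SSH is already set up, since the unit cannot prompt for a password.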
Docker Setup
Overview
Docker is a platform for running applications in isolated containers. A container packages an application with all its dependencies, ensuring it runs the same way everywhere.
┌─────────────────────────────────────────────────────┐
│ Your VPS │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Container 1 │ │ Container 2 │ │ Container 3 │ │
│ │ Node.js │ │ PostgreSQL │ │ Redis │ │
│ │ :3000 │ │ :5432 │ │ :6379 │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ ┌───────────────────────────────────────────────┐ │
│ │ Docker Engine │ │
│ └───────────────────────────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────┐ │
│ │ Linux Kernel │ │
│ └───────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────┘
Benefits:
- Consistent environments (no “works on my machine” issues)
- Easy deployment and rollback
- Isolation between applications
- Simple dependency management
Prerequisites
Installation
Follow the official Docker installation guide: https://docs.docker.com/engine/install/
Run Docker Without sudo
By default, Docker requires root privileges. Add your user to the docker group to run commands without sudo:
sudo usermod -aG docker $USER
Log out and back in for the change to take effect, or run:
newgrp docker
Verify it works:
docker run hello-world
This downloads a test image and runs it. If you see “Hello from Docker!”, everything is working.
Docker Concepts
Images vs Containers
| Concept | Description |
|---|---|
| Image | A read-only template containing the application and dependencies |
| Container | A running instance of an image |
Think of an image as a class and a container as an object. You can run multiple containers from the same image.
Common Commands
Images:
docker images # List downloaded images
docker pull nginx # Download an image
docker rmi nginx # Remove an image
Containers:
docker ps # List running containers
docker ps -a # List all containers (including stopped)
docker run -d nginx # Run container in background
docker stop <container-id> # Stop a container
docker rm <container-id> # Remove a container
docker logs <container-id> # View container logs
docker exec -it <id> bash # Open shell inside container
Running a Container
Basic example - run nginx web server:
docker run -d -p 8080:80 --name my-nginx nginx
| Flag | Purpose |
|---|---|
| -d | Run in background (detached mode) |
| -p 8080:80 | Map host port 8080 to container port 80 |
| --name my-nginx | Give the container a name |
| nginx | The image to use |
Visit http://<vps-ip>:8080 to see the nginx welcome page.
Stop and remove when done:
docker stop my-nginx
docker rm my-nginx
Cleanup
Docker can accumulate unused data. Clean up periodically:
docker system prune # Remove stopped containers, unused networks, dangling images
docker system prune -a # Also remove all unused (not just dangling) images
docker volume prune # Remove unused volumes
Check disk usage:
docker system df
Docker Compose
Overview
Docker Compose is a tool for defining and running multi-container applications. Instead of managing containers individually with multiple docker run commands, you define your entire application stack in a single YAML file.
This makes it easy to:
- Start/stop all services with one command
- Define relationships between containers
- Share configurations across team members
- Recreate consistent environments
Prerequisites
- Docker installed and configured (see Docker Setup)
Example: Web App with Database
Create a docker-compose.yml file:
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html
    depends_on:
      - db

  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
This defines:
- A web server (nginx) on port 80
- A PostgreSQL database with persistent storage
- The web server waits for the database to start first
Start all services:
docker compose up -d
Stop and remove all services:
docker compose down
View running services and logs:
docker compose ps # List running services
docker compose logs # View logs from all services
docker compose logs -f web # Follow logs for specific service
Volumes
Named Volumes
Docker manages the storage location. Data persists even when containers are removed:
services:
  db:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
Bind Mounts
Map a host directory to a container directory. Changes on the host immediately appear in the container:
services:
  web:
    image: nginx
    volumes:
      - ./html:/usr/share/nginx/html
This is useful for development - edit files locally and see changes immediately.
Volume Commands
List all volumes:
docker volume ls
Show volume details:
docker volume inspect <name>
Remove a volume:
docker volume rm <name>
Remove all unused volumes:
docker volume prune
Networking
Containers in the same Compose file can communicate using service names as hostnames:
services:
  web:
    image: myapp
    environment:
      DATABASE_URL: postgres://db:5432/myapp

  db:
    image: postgres:15
The web container can reach the database at db:5432 (not localhost). Docker Compose automatically creates a network for all services.
Environment Variables
Pass environment variables to containers:
services:
  app:
    image: myapp
    environment:
      NODE_ENV: production
      API_KEY: secret123
      DATABASE_URL: postgres://db:5432/myapp
Or load from a file:
services:
  app:
    image: myapp
    env_file:
      - .env
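The referenced .env file is plain KEY=value lines, one per variable. For example (values are illustrative; keep this file out of version control):

```
NODE_ENV=production
API_KEY=secret123
DATABASE_URL=postgres://db:5432/myapp
```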
Building Custom Images
Build images from a Dockerfile:
services:
  app:
    build: .
    ports:
      - "3000:3000"
Or specify build context and Dockerfile:
services:
  app:
    build:
      context: ./app
      dockerfile: Dockerfile.prod
    ports:
      - "3000:3000"
Headscale Setup
Overview
Headscale is a self-hosted, open-source implementation of the Tailscale control server. It creates a mesh VPN that lets your devices communicate securely as if they were on the same local network, regardless of where they are.
Important: Headscale is a coordination server only. It manages authentication, distributes encryption keys, and helps devices find each other. It does NOT route your traffic - devices connect directly to each other (P2P).
┌─────────────────────────────────────────────────────────┐
│ Headscale Server │
│ (Control Plane) │
│ │
│ Coordinates connections, manages authentication, │
│ distributes keys - but doesn't route traffic │
└─────────────────────────────────────────────────────────┘
│
┌────────────────┼────────────────┐
│ │ │
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Laptop │◄───►│ Phone │◄───►│ Server │
│ 100.64.x│ │ 100.64.x│ │ 100.64.x│
└─────────┘ └─────────┘ └─────────┘
│ │ │
└────────────────┴────────────────┘
Direct P2P connections
(encrypted, no central routing)
With Headscale, you control your own coordination server instead of using Tailscale’s hosted service. Each device that connects to Headscale runs the Tailscale client.
Prerequisites
- VPS setup completed (see VPS Setup)
- UFW configured (see UFW Setup)
- nginx installed and configured (see nginx Setup)
- A domain pointed to your VPS IP (required for SSL)
- Port 443 available
Installation
Follow the official Headscale setup guide: https://headscale.net/stable/setup/requirements
Remember to configure config.yaml before starting the service.
Expose with HTTPS
Headscale runs on port 8080 by default and requires HTTPS to work properly.
Set up nginx as a reverse proxy (see nginx Setup) and obtain an SSL certificate (see SSL Setup).
Important for Headscale: Enable WebSocket support by adding these headers to your nginx config:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

# In your server block location:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
This is required because Tailscale clients use WebSockets for real-time communication.
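Assembled into a complete (pre-SSL) server block, the proxy might look like this sketch. The map block must sit at the http level, e.g. in its own file under /etc/nginx/conf.d/, not inside the server block; Certbot will extend the server block for HTTPS afterwards:

```nginx
# http-level context (e.g. /etc/nginx/conf.d/)
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;
    server_name <headscale-domain>;

    location / {
        # Headscale listens on 8080 by default (see config.yaml)
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Run `sudo nginx -t` and reload, then obtain a certificate with certbot as described in SSL Setup.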
User Management
Headscale organizes devices by user. Create a user before connecting any devices:
headscale users create <username>
List all users:
headscale users list
Connect Devices
To connect devices to your Headscale network, see Tailscale Client.
Common Commands
| Command | Description |
|---|---|
| headscale users create <name> | Create a new user |
| headscale users list | List all users |
| headscale nodes list | List all connected devices |
| headscale preauthkeys create --user <name> | Generate an auth key |
| headscale nodes delete --identifier <id> | Remove a device |
Tailscale Client
Overview
The Tailscale client is the software that runs on each device in your Headscale network. It handles:
- Registering with your Headscale control server
- Establishing encrypted P2P connections with other devices
- Managing the virtual network interface
Architecture:
┌─────────────────────────────────────────────────────┐
│ Headscale Control Server │
│ │
│ • Manages authentication │
│ • Distributes encryption keys │
│ • Coordinates device discovery │
│ • Does NOT route your traffic │
└──────────────────┬──────────────────────────────────┘
│
│ (register & coordinate)
│
┌──────────┼──────────┬──────────┐
│ │ │ │
▼ ▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│ Laptop │ │ Phone │ │ Server │ │ Etc │
│(Client)│ │(Client)│ │(Client)│ │(Client)│
└───┬────┘ └───┬────┘ └───┬────┘ └───┬────┘
│ │ │ │
└──────────┴──────────┴──────────┘
P2P Encrypted Connections
(clients talk directly, not through server)
Key concepts:
- The Tailscale client runs on every device - your laptop, phone, servers, etc.
- Each device connects to Headscale to register, then communicates directly with other devices
- You can run the Tailscale client on the same machine as your Headscale server - they are separate services (Headscale coordinates, the client participates in the mesh)
Prerequisites
- Headscale server set up and accessible (see Headscale Setup)
- A device you want to add to your network
- Headscale server URL (e.g., https://headscale.example.com)
Installation
Install the Tailscale client on the device you want to connect. Follow the official installation guide for your operating system: https://tailscale.com/kb/1347/installation
Generate Pre-Authentication Key
On your Headscale server, create a pre-authentication key for the device.
First, ensure you have a user:
headscale users create <username>
Generate a key that expires in 1 hour:
headscale preauthkeys create --user <username> --expiration 1h
Copy the generated key. You’ll use this to authenticate the device.
Connect Device to Headscale
On the device with Tailscale installed, connect to your Headscale server:
tailscale up --login-server https://<headscale-domain> --authkey <key>
Replace:
- <headscale-domain> with your Headscale server URL
- <key> with the pre-authentication key you generated
The device will:
- Connect to your Headscale server
- Register using the provided key
- Join your private network
- Get assigned a Tailscale IP address (typically in the 100.64.x.x range)
Verify Connection
On your Headscale server, list all connected devices:
headscale nodes list
You should see your newly connected device with:
- Device name
- User it belongs to
- Tailscale IP address
- Last seen timestamp
On the client device, check Tailscale status:
tailscale status
This shows all devices in your network and their Tailscale IP addresses.
Test Connectivity
From your newly connected device, ping another device in the network:
ping <other-device-tailscale-ip>
Or SSH to another device using its Tailscale IP:
ssh user@<tailscale-ip>
This works even if devices are behind NAT or firewalls - that’s the power of Tailscale’s mesh network.
Common Commands
| Command | Description |
|---|---|
| tailscale status | View connection status and peer list |
| tailscale ip -4 | Show your device’s Tailscale IP address |
| tailscale down | Stop Tailscale (device stays registered) |
| tailscale up | Reconnect (no re-authentication needed) |
Remove Device from Network
To permanently remove a device from your Headscale network, run this on the Headscale server:
headscale nodes list
headscale nodes delete --identifier <node-id>
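Hunting for the right numeric ID by eye gets tedious with many nodes; a small helper can pull it out of the list output by hostname. A sketch that assumes each row starts with the node’s numeric ID (check the table layout your Headscale version actually prints):

```shell
# Hypothetical helper: given `headscale nodes list` output on stdin,
# print the leading numeric ID of the row whose hostname matches
# exactly (delimited by spaces or `|`). Column layout is assumed.
node_id_for() {
  awk -v h="$1" '
    $0 ~ ("(^|[ |])" h "([ |]|$)") && match($0, /^[[:space:]]*[0-9]+/) {
      print substr($0, RSTART, RLENGTH) + 0
    }'
}

# Usage on the Headscale server:
# headscale nodes list | node_id_for old-laptop
# headscale nodes delete --identifier "$(headscale nodes list | node_id_for old-laptop)"
```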
Notes
- Each device needs the client: Install Tailscale on every device you want in the network
- One-time setup: After initial connection, devices auto-reconnect
- Cross-platform: Tailscale clients work the same way across all platforms
- Direct connections: Devices communicate peer-to-peer - traffic doesn’t go through the Headscale server
- Pre-auth keys expire: Generate a new key for each device you add
Exit Node
Overview
An exit node is any device in your Tailscale/Headscale network that routes internet traffic for other devices. When you connect through an exit node, your internet traffic appears to come from that device’s location.
┌─────────────────┐
│ Headscale │ (Coordination server only)
│ Control Server │ (Does NOT route traffic)
└─────────────────┘
│
│ (coordinates)
│
┌────┴────┐
│ │
▼ ▼
┌─────────┐ ┌─────────────┐
│ Laptop │ │ VPS │
│ │ │ (Exit Node) │
└─────────┘ └──────┬──────┘
│ │
│ ▼
└──► routes ──► Internet
traffic (appears from VPS IP)
Key concepts:
- Headscale server: Coordinates the network, doesn’t route traffic
- Exit node: Any device in your network configured to route traffic
- Client device: Any device using the exit node for internet access
Exit nodes can run on:
- Your VPS (common setup for stable IP)
- Home server (useful for accessing local network)
- Any other machine in your Tailscale network
Prerequisites
- Headscale setup completed (see Headscale Setup)
- Tailscale client installed on exit node machine (see Tailscale Client)
- A machine you want to configure as an exit node
Enable IP Forwarding
The machine must forward packets between its network interface and the Tailscale interface:
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
This tells the kernel to route traffic between interfaces instead of dropping it.
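Appending to /etc/sysctl.conf works; a dedicated drop-in file keeps the change easier to track and undo later. An equivalent alternative (the filename 99-tailscale.conf is just a convention, not a requirement):

```shell
# Same settings in a drop-in file instead of /etc/sysctl.conf
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Verify (should print: net.ipv4.ip_forward = 1):
# sysctl net.ipv4.ip_forward
```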
Configure Firewall
Allow traffic on the Tailscale interface.
If using UFW (see UFW Setup):
sudo ufw allow in on tailscale0
sudo ufw route allow in on tailscale0
The first rule allows incoming connections on the Tailscale interface. The second allows routing/forwarding traffic through it.
Connect to Headscale as Exit Node
Generate a pre-authentication key on your Headscale server:
headscale preauthkeys create --user <username> --expiration 1h
On the exit node machine, connect to Headscale with the --advertise-exit-node flag:
sudo tailscale up --login-server https://<headscale-domain> --advertise-exit-node --authkey <key>
This registers the machine with Headscale and advertises it as an exit node.
Verify the node is connected:
headscale nodes list
You should see your exit node listed with its Tailscale IP address.
Use the Exit Node from Client Devices
On any other device in your Headscale network, route traffic through the exit node:
tailscale up --exit-node=<exit-node-tailscale-ip>
Find the exit node’s Tailscale IP with headscale nodes list on the Headscale server.
All internet traffic from that device now goes through the exit node.
Verify Exit Node is Working
Check your public IP from the client device:
curl ifconfig.me
This should show the exit node’s public IP address, not the client’s original IP.
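A before/after comparison makes the switch unambiguous. A sketch using the same ifconfig.me check, run from the client device (requires network access; replace the placeholder with your exit node’s Tailscale IP):

```shell
# Before enabling the exit node, note the client's own public IP:
before=$(curl -s ifconfig.me)

# Route through the exit node, then check again:
sudo tailscale up --exit-node=<exit-node-tailscale-ip>
after=$(curl -s ifconfig.me)

# If the exit node is working, the two IPs differ and `after`
# matches the exit node's public IP:
[ "$before" != "$after" ] && echo "exit node in use ($after)"
```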
Stop Using Exit Node
To stop routing through the exit node:
tailscale up --exit-node=
The device will resume using its own internet connection.
Notes
- Headscale vs Exit Node: Headscale coordinates the network but doesn’t route traffic. Exit nodes do the actual traffic routing.
- Multiple exit nodes: You can have multiple exit nodes in your network. Choose which one to use on a per-device basis.
- Performance: Traffic goes directly from client → exit node → internet (not through the Headscale server).
- Location flexibility: Exit nodes can be anywhere - your VPS for a stable IP, home server for LAN access, etc.