Introduction
Welcome! This is a collection of notes documenting server infrastructure setup and configuration by Ilham Aulia Majid.
Whether you’re setting up your first VPS or exploring self-hosting, these guides walk you through each step with explanations of what you’re doing and why.
What This Covers
| Topic | What You’ll Learn |
|---|---|
| VPS Basics | Securing a fresh server, SSH keys, firewall setup |
| Containerization | Running applications in isolated Docker containers |
| VPN | Creating a private network between your devices with Tailscale |
| Web Server | Serving websites and reverse proxying with Caddy |
| Services | Deploying n8n, Beszel, Garage, and more |
Key Concepts
Before diving in, here are some terms you’ll see throughout these guides:
- VPS (Virtual Private Server): A remote computer you rent from a cloud provider (like DigitalOcean, Linode, or Vultr) that runs 24/7.
- SSH (Secure Shell): A protocol for securely connecting to and controlling remote servers from your terminal.
- Domain Name: A human-readable address like example.com that points to your server’s IP address. You buy these from domain registrars.
- DNS (Domain Name System): The system that translates domain names into IP addresses (like a phonebook for the internet).
- Reverse Proxy: A server that sits between users and your applications. It receives requests from users and forwards them to the right application — like a receptionist directing visitors.
- Container: A lightweight, isolated environment that packages an application with everything it needs to run. Think of it as a mini virtual machine that shares the host’s operating system.
- sudo: Short for “superuser do” — a command that lets you run commands with administrator privileges.
- systemd: The system that manages services on most modern Linux distributions. systemctl is the command you use to interact with it.
How to Use
The documentation is organized in a logical progression — start with VPS Basics for a secure foundation, then proceed through the chapters based on your needs. Each guide includes prerequisites, step-by-step instructions, and common commands for reference.
About This Site
This documentation is rendered as a browsable website using mdBook — a tool that converts Markdown files into a searchable, navigable book format. It’s like reading a book, but with hyperlinks, search, and code blocks.
VPS Setup
Overview
This guide covers the initial security setup for a fresh VPS. A new VPS typically comes with password authentication enabled and no protection against attacks. We’ll secure it by:
- Updating the system packages
- Creating a non-root user with sudo access
- Setting up SSH key authentication (more secure than passwords)
- Configuring automatic security updates (keeps the system patched)
- Installing Fail2Ban (blocks brute-force attacks)
After completing this guide, your VPS will have a solid security foundation for hosting services.
Prerequisites
- A VPS running Ubuntu Server (commands should be similar on other Debian-based distributions)
- Root or sudo access
- SSH client on your local machine (Terminal on macOS/Linux, or Windows Terminal)
Initial System Update
Before doing anything else, update your system’s package list and install pending updates. This ensures you start with the latest security patches.
sudo apt update && sudo apt upgrade -y
What is sudo? It’s short for “superuser do” — it runs commands with administrator privileges. You’ll use sudo frequently when configuring your server.
Create a Non-Root User
Many VPS providers give you a default user (like ubuntu or root). It’s best practice to create your own non-root user for daily use.
Check if a user already exists
whoami
If you see root or a provider-created username, you can either use that or create a new one.
Create a new user
sudo adduser <username>
Set a password and fill in the optional details (you can press Enter to skip them).
Give the user sudo access
sudo usermod -aG sudo <username>
This allows the user to run commands with sudo (administrator privileges).
Switch to the new user
su - <username>
From here, all commands assume you’re logged in as this non-root user.
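To confirm the new account really has administrator rights, run a harmless privileged command:
sudo whoami
# Should print: root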
SSH Setup
SSH keys are more secure than passwords because they can’t be guessed or brute-forced. The key pair consists of a private key (stays on your machine) and a public key (goes on the server).
Generate a Key Pair
On your local machine, generate an ed25519 key:
ssh-keygen -t ed25519
Press Enter to accept the default location. Optionally set a passphrase for extra security.
This creates two files:
- ~/.ssh/id_ed25519 - your private key (never share this)
- ~/.ssh/id_ed25519.pub - your public key (safe to share)
Copy the Public Key to VPS
ssh-copy-id <username>@<vps-ip>
This appends your public key to the server’s ~/.ssh/authorized_keys file. You’ll need to enter your password one last time.
If ssh-copy-id fails (e.g., your VPS only supports key authentication from the provider), copy the contents of ~/.ssh/id_ed25519.pub manually and append it to the server’s ~/.ssh/authorized_keys file:
cat ~/.ssh/id_ed25519.pub
# Copy the output, then paste it into the VPS:
echo "<paste-your-public-key-here>" >> ~/.ssh/authorized_keys
Configure SSH Client
Add this to ~/.ssh/config on your local machine to simplify connections:
Host *
AddKeysToAgent yes
IdentitiesOnly yes
ServerAliveInterval 60
Host vps
HostName <vps-ip>
User <username>
Port 22
IdentityFile ~/.ssh/id_ed25519
# UseKeychain yes # macOS only — uncomment to store key in macOS Keychain
Host github.com
HostName github.com
User git
| Setting | Purpose |
|---|---|
| AddKeysToAgent yes | Automatically add keys to SSH agent |
| IdentitiesOnly yes | Only use explicitly configured keys |
| ServerAliveInterval 60 | Send keepalive every 60 seconds to prevent disconnection |
Why isn’t IdentityFile in the Host * block? Putting it there would force ALL SSH connections to use that key, which can break connections to other servers. Keep it specific to each host.
macOS users: Uncomment UseKeychain yes to store your key passphrase in the macOS Keychain, so you don’t need to enter it every time.
Now you can connect with just:
ssh vps
No password needed.
Disable Password Authentication
Now that SSH keys are working, disable password-based login for better security. This prevents brute-force attacks.
Edit the SSH server config:
sudo vim /etc/ssh/sshd_config
Find and change these lines (remove the # if they’re commented out):
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
What do these do?
- PermitRootLogin no: Blocks direct root login. Always use your user account with sudo instead.
- PasswordAuthentication no: Disables password login — only SSH keys work.
- ChallengeResponseAuthentication no: Disables another password-based auth method (newer OpenSSH releases call this option KbdInteractiveAuthentication).
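Before restarting, validate the edited file; the -t flag makes sshd check the configuration and exit:
sudo sshd -t
# Silence means the config parses cleanly; syntax errors are printed instead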
Restart SSH to apply (on Ubuntu the service unit is named ssh; some other distributions name it sshd):
sudo systemctl restart ssh
⚠️ Important: Test that SSH keys work before closing your current session! Open a new terminal and run ssh vps to verify. If something went wrong, you can still use the existing session to fix the config.
Timezone
Set the server timezone so logs show the correct local time:
sudo timedatectl set-timezone Asia/Jakarta
Verify with:
timedatectl
Replace Asia/Jakarta with your timezone. List available timezones with timedatectl list-timezones.
Auto Security Updates
Security vulnerabilities are discovered regularly. Unattended-upgrades automatically installs security patches so you don’t have to manually update.
Install and configure:
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
Select “Yes” when prompted to enable automatic updates.
The system will now:
- Check for security updates daily
- Install them automatically
- Keep your system patched without intervention
Configuration (Optional)
Config file location: /etc/apt/apt.conf.d/50unattended-upgrades
To enable automatic reboots when required (e.g., kernel updates), add:
Unattended-Upgrade::Automatic-Reboot "true";
View update logs at: /var/log/unattended-upgrades/unattended-upgrades.log
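To preview what an automatic run would install without changing anything, trigger a dry run (note the singular binary name):
sudo unattended-upgrade --dry-run --debug
# Simulates an upgrade run and prints the packages it would install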
Fail2Ban
Fail2Ban monitors log files for failed login attempts. When it detects repeated failures from an IP address, it bans that IP by adding a firewall rule.
This protects against brute-force SSH attacks where attackers try thousands of password combinations.
Install and Enable
sudo apt install -y fail2ban
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
Check Status
View all active jails:
sudo fail2ban-client status
View SSH jail specifically (shows banned IPs):
sudo fail2ban-client status sshd
Unban an IP
If you accidentally get banned (e.g., too many failed logins):
sudo fail2ban-client set sshd unbanip <ip>
Custom Settings (Optional)
The default settings work well for most cases. To customize, create a local config:
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo vim /etc/fail2ban/jail.local
| Setting | Default | Description |
|---|---|---|
| maxretry | 5 | Number of failures before ban |
| bantime | 10m | How long the ban lasts |
| findtime | 10m | Time window for counting failures |
Example: With defaults, 5 failed logins within 10 minutes triggers a 10-minute ban.
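For example, a stricter SSH jail override in jail.local might look like this (values are illustrative, not recommendations):
[sshd]
enabled = true
maxretry = 3
bantime = 1h
Apply the change with sudo systemctl restart fail2ban.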
SSH jail (sshd) is enabled by default - no extra configuration needed.
UFW Setup
Overview
UFW (Uncomplicated Firewall) is a frontend for iptables that controls which ports are accessible from the internet. Without a firewall, every service listening on a public interface is reachable from the internet. UFW lets you block everything except the services you explicitly allow.
Think of it as a gatekeeper: all incoming traffic is denied unless you create a rule to allow it.
How It Works
Internet Traffic
│
▼
┌─────────────────┐
│ UFW Firewall │
│ │
│ Port 22 ✓ ───────► SSH
│ Port 80 ✓ ───────► nginx (HTTP)
│ Port 443 ✓ ───────► nginx (HTTPS)
│ Port 3000 ✗ (blocked)
│ Port 5432 ✗ (blocked)
│ │
└─────────────────┘
Only ports with explicit “allow” rules pass through. Everything else is blocked.
⚠️ Docker Bypasses UFW
Docker directly manipulates iptables, bypassing UFW entirely. If you publish a port in Docker (e.g., ports: - "8080:80"), it’s accessible from the public internet regardless of UFW rules.
To restrict Docker ports to specific networks, see Tailscale-Only Services. UFW still protects non-Docker services (SSH, etc.).
Prerequisites
- VPS setup completed (see VPS Setup)
Installation
sudo apt install -y ufw
Basic Setup
Before enabling UFW, you must allow SSH. Otherwise you will lock yourself out of the server.
sudo ufw allow OpenSSH
This creates a rule allowing incoming connections on port 22 (SSH).
Now enable the firewall:
sudo ufw enable
UFW is now active. All incoming traffic is blocked except SSH.
Verify with:
sudo ufw status
Reading Status Output
To see all rules with numbers (useful for deleting rules later):
sudo ufw status numbered
Understanding the Output
Columns:
- To: Where the traffic is going (destination)
- Action: What UFW does (ALLOW IN/OUT/FWD)
- From: Where the traffic comes from (source)
Actions:
- ALLOW IN: Incoming connections to your server (e.g., SSH, web traffic)
- ALLOW OUT: Outgoing connections from your server (e.g., downloading updates)
- ALLOW FWD: Traffic routing/forwarding through your server (e.g., VPN traffic)
Example Breakdown
[ 1] OpenSSH ALLOW IN Anywhere
→ Allow SSH connections from anywhere to your server (port 22)
[ 2] Nginx Full ALLOW IN Anywhere
→ Allow HTTP/HTTPS connections from anywhere to your server (ports 80 and 443)
[ 3] Anywhere on tailscale0 ALLOW IN Anywhere
→ Allow any incoming traffic on the Tailscale interface (VPN traffic)
[ 4] Anywhere ALLOW FWD Anywhere on tailscale0
→ Allow forwarding traffic FROM Tailscale interface to anywhere (VPN routing)
[ 5] Anywhere ALLOW OUT Anywhere on tailscale0 (out)
→ Allow outgoing traffic TO the Tailscale interface
[ 6] Anywhere on eth0 ALLOW FWD Anywhere on tailscale0
→ Allow forwarding FROM Tailscale TO eth0 (VPN to internet)
[ 7] Anywhere on tailscale0 ALLOW FWD Anywhere on eth0
→ Allow forwarding FROM eth0 TO Tailscale (internet to VPN)
IPv6 Rules:
Rules with (v6) are the same rules but for IPv6 traffic. UFW creates matching IPv6 rules for every IPv4 rule. For example, if rule [1] is OpenSSH ALLOW IN Anywhere, you’ll also see [1] (v6) OpenSSH ALLOW IN Anywhere (v6) — the same rule applied to both IP versions.
Adding Rules
There are several ways to allow traffic through the firewall.
By service name (UFW knows common services):
sudo ufw allow OpenSSH # Port 22
sudo ufw allow 'Nginx Full' # Ports 80 and 443
By port number (when you need a specific port):
sudo ufw allow 80/tcp # HTTP
sudo ufw allow 443/tcp # HTTPS
The /tcp suffix specifies the protocol. Use /udp for UDP traffic.
By port range (for services using multiple ports):
sudo ufw allow 6000:6007/tcp
This allows ports 6000 through 6007.
By interface (for VPN routing, like Tailscale):
sudo ufw allow in on tailscale0
sudo ufw route allow in on tailscale0
This allows traffic on the tailscale0 network interface and permits routing through it.
Removing Rules
First, list rules with numbers:
sudo ufw status numbered
Then delete by number:
sudo ufw delete 3
This removes rule number 3. Rule numbers shift after deletion, so always re-check with status numbered before deleting another.
Alternatively, delete by specification (exactly as you added it):
sudo ufw delete allow 80/tcp
Common Rules Reference
| Service | Command | What it allows |
|---|---|---|
| SSH | sudo ufw allow OpenSSH | Remote terminal access (port 22) |
| HTTP | sudo ufw allow 80/tcp | Web traffic, unencrypted |
| HTTPS | sudo ufw allow 443/tcp | Web traffic, encrypted |
| HTTP + HTTPS | sudo ufw allow 'Nginx Full' | Both web ports at once |
| Ping | edit /etc/ufw/before.rules | ICMP ping (UFW’s allow rules cannot target ICMP) |
Notes
- UFW blocks all incoming traffic by default (deny policy)
- Ping (ICMP) is allowed by default through rules in /etc/ufw/before.rules; ufw allow cannot manage ICMP directly
- Rules persist across reboots
- Always allow SSH before enabling UFW, or you will lose access
- When in doubt, check sudo ufw status before and after changes
Docker Setup
Overview
Docker is a platform for running applications in isolated containers. A container packages an application with all its dependencies, ensuring it runs the same way everywhere.
┌─────────────────────────────────────────────────────┐
│ Your VPS │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Container 1 │ │ Container 2 │ │ Container 3 │ │
│ │ Node.js │ │ PostgreSQL │ │ Redis │ │
│ │ :3000 │ │ :5432 │ │ :6379 │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ ┌───────────────────────────────────────────────┐ │
│ │ Docker Engine │ │
│ └───────────────────────────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────┐ │
│ │ Linux Kernel │ │
│ └───────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────┘
Benefits:
- Consistent environments (no “works on my machine” issues)
- Easy deployment and rollback
- Isolation between applications
- Simple dependency management
Prerequisites
- VPS setup completed (see VPS Setup)
Installation
Follow the official Docker installation guide: https://docs.docker.com/engine/install/
Run Docker Without sudo
By default, Docker requires root privileges. Add your user to the docker group to run commands without sudo:
sudo usermod -aG docker $USER
Log out and back in for the change to take effect, or run:
newgrp docker
Verify it works:
docker run hello-world
This downloads a test image and runs it. If you see “Hello from Docker!”, everything is working.
⚠️ Important: Docker and Firewall Interaction
Docker directly manipulates iptables to expose container ports, which means it bypasses UFW firewall rules. If you publish a port in a Docker Compose file (e.g., ports: - "8080:80"), that port is accessible from the public internet regardless of your UFW configuration.
To restrict Docker-published ports to specific networks (like Tailscale), see Tailscale-Only Services.
Docker Concepts
Images vs Containers
| Concept | Description |
|---|---|
| Image | A read-only template containing the application and dependencies |
| Container | A running instance of an image |
Think of an image as a class and a container as an object. You can run multiple containers from the same image.
Container Restart Policies
When defining containers in docker-compose.yml, you’ll often see restart: unless-stopped:
services:
myapp:
image: nginx
restart: unless-stopped # Restart unless you explicitly stop it
Common restart policies:
| Policy | Behavior |
|---|---|
| no | Never restart automatically |
| always | Always restart, even if you manually stopped it |
| unless-stopped | Restart unless you explicitly stopped it (most common) |
| on-failure | Restart only if the container exits with a non-zero code |
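You can also change the policy of an already-running container without recreating it (my-nginx is just an example container name):
docker update --restart unless-stopped my-nginx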
Common Commands
Images:
docker images # List downloaded images
docker pull nginx # Download an image
docker rmi nginx # Remove an image
Containers:
docker ps # List running containers
docker ps -a # List all containers (including stopped)
docker run -d nginx # Run container in background
docker stop <container-id> # Stop a container
docker rm <container-id> # Remove a container
docker logs <container-id> # View container logs
docker exec -it <id> bash # Open shell inside container
Running a Container
Basic example - run nginx web server:
docker run -d -p 8080:80 --name my-nginx nginx
| Flag | Purpose |
|---|---|
-d | Run in background (detached mode) |
-p 8080:80 | Map host port 8080 to container port 80 |
--name my-nginx | Give the container a name |
nginx | The image to use |
Visit http://<vps-ip>:8080 to see the nginx welcome page.
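You can also test from the VPS itself before trying the browser:
curl -I http://localhost:8080
# Expect HTTP/1.1 200 OK with a Server: nginx header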
Stop and remove when done:
docker stop my-nginx
docker rm my-nginx
Cleanup
Docker can accumulate unused data. Clean up periodically:
docker system prune # Remove stopped containers, unused networks, dangling images
docker system prune -a # Also remove all unused images, not just dangling ones
docker volume prune # Remove unused volumes
Check disk usage:
docker system df
Docker Compose
Overview
Docker Compose is a tool for defining and running multi-container applications. Instead of managing containers individually with multiple docker run commands, you define your entire application stack in a single YAML file.
This makes it easy to:
- Start/stop all services with one command
- Define relationships between containers
- Share configurations across team members
- Recreate consistent environments
Prerequisites
- Docker installed and configured (see Docker Setup)
Example: Web App with Database
Create a docker-compose.yml file:
services:
web:
image: caddy:2
ports:
- "80:80"
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- caddy_data:/data
- caddy_config:/config
depends_on:
- db
db:
image: postgres:15
environment:
POSTGRES_PASSWORD: secret
POSTGRES_DB: myapp
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
caddy_data:
caddy_config:
This defines:
- A web server (Caddy) on ports 80 and 443 with automatic HTTPS
- A PostgreSQL database with persistent storage
- The web server waits for the database container to start first (depends_on orders startup; it does not wait until the database is ready to accept connections)
Start all services:
docker compose up -d
Stop and remove all services:
docker compose down
View running services and logs:
docker compose ps # List running services
docker compose logs # View logs from all services
docker compose logs -f web # Follow logs for specific service
Volumes
Named Volumes
Docker manages the storage location. Data persists even when containers are removed:
services:
db:
image: postgres:15
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
Bind Mounts
Map a host directory to a container directory. Changes on the host immediately appear in the container:
services:
web:
image: caddy:2
volumes:
- ./html:/usr/share/caddy
This is useful for development - edit files locally and see changes immediately.
Volume Commands
List all volumes:
docker volume ls
Show volume details:
docker volume inspect <name>
Remove a volume:
docker volume rm <name>
Remove all unused volumes:
docker volume prune
Networking
Containers in the same Compose file can communicate using service names as hostnames:
services:
web:
image: myapp
environment:
DATABASE_URL: postgres://db:5432/myapp
db:
image: postgres:15
The web container can reach the database at db:5432 (not localhost). Docker Compose automatically creates a network for all services.
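You can check this resolution from inside a running container; getent is present in most glibc-based images, though minimal Alpine images may lack it:
docker compose exec web getent hosts db
# Prints the IP address that the service name db resolves to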
Environment Variables
Pass environment variables to containers:
services:
app:
image: myapp
environment:
NODE_ENV: production
API_KEY: secret123
DATABASE_URL: postgres://db:5432/myapp
Or load from a file:
services:
app:
image: myapp
env_file:
- .env
Using .env Files (Best Practice)
For sensitive values like passwords and API keys, use a .env file instead of hardcoding them in your compose file:
1. Create .env in the same directory as your docker-compose.yml:
cat > .env << 'EOF'
DB_PASSWORD=mysecretpassword
API_KEY=myapikey
EOF
2. Reference variables in your compose file:
services:
  app:
    image: myapp
    environment:
      - DB_PASSWORD=${DB_PASSWORD}
      - API_KEY=${API_KEY}
3. Always add .env to your .gitignore to prevent accidentally committing secrets:
echo ".env" >> .gitignore
Building Custom Images
Build images from a Dockerfile:
services:
app:
build: .
ports:
- "3000:3000"
Or specify build context and Dockerfile:
services:
app:
build:
context: ./app
dockerfile: Dockerfile.prod
ports:
- "3000:3000"
Tailscale Setup
Overview
Tailscale is a mesh VPN that lets your devices communicate securely as if they were on the same local network, regardless of location. Unlike traditional VPNs, Tailscale uses WireGuard to create direct P2P connections between devices — no central server routing your traffic.
┌─────────────────────────────────────────────────────────┐
│ Tailscale Cloud │
│ (Coordination) │
│ │
│ Manages authentication, distributes keys, │
│ helps devices find each other │
└─────────────────────────────────────────────────────────┘
│
┌────────────────┼────────────────┐
│ │ │
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Laptop │◄───►│ Phone │◄───►│ Server │
│ 100.x.x │ │ 100.x.x │ │ 100.x.x │
└─────────┘ └─────────┘ └─────────┘
│ │ │
└────────────────┴────────────────┘
Direct P2P connections
(encrypted, no central routing)
This guide uses Tailscale’s free hosted service — just install the client and log in.
What is a “tailnet”? Your Tailscale network is called a “tailnet.” It’s your private virtual network that all your Tailscale-connected devices join. Each tailnet gets its own private IP range (typically 100.64.x.x), and devices can communicate as if they were on the same local network, even across the internet.
Prerequisites
- VPS setup completed (see VPS Setup)
- UFW configured (see UFW Setup)
- A Tailscale account (sign up at tailscale.com)
Install Tailscale Client
Linux
curl -fsSL https://tailscale.com/install.sh | sh
macOS
brew install tailscale
Or download from tailscale.com.
Mobile
Install from App Store (iOS) or Play Store (Android).
Log In
Start Tailscale and authenticate via browser:
sudo tailscale up
This opens your default browser asking you to log in with your Tailscale account. Once authenticated, your device joins your Tailscale network.
Options
# Advertise as exit node (route all traffic through this device)
sudo tailscale up --advertise-exit-node
# Use a custom Tailscale name instead of hostname
sudo tailscale up --hostname my-server
# Accept routes to your local network (if advertised)
sudo tailscale up --accept-routes
Connect Other Devices
Install Tailscale on each device and run tailscale up with the same account. Devices automatically discover each other and create direct P2P connections.
Check Status
tailscale status
Shows all connected devices, their Tailscale IPs, and connection type (direct vs relay).
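To see whether a specific peer is reached directly or through a relay, tailscale ping reports the path taken (my-laptop is a hypothetical device name):
tailscale ping my-laptop
# Replies indicate whether the packet went over a direct endpoint or a DERP relay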
Exit Node
Any device can become an exit node to route internet traffic for other devices:
1. Enable on the exit node device:
sudo tailscale up --advertise-exit-node
2. Approve it in the Tailscale admin console at tailscale.com/admin
3. Connect other devices through it:
tailscale up --exit-node=<exit-node-ip>
Key Commands
| Command | Description |
|---|---|
| tailscale up | Start Tailscale |
| tailscale down | Stop Tailscale |
| tailscale status | Show connected devices |
| tailscale ip -4 | Show your Tailscale IPv4 |
| tailscale logout | Log out of Tailscale |
Tailscale-Only Services
Overview
By default, Docker publishes ports to all network interfaces (0.0.0.0), making services reachable from both the public internet and your Tailscale network. This guide shows how to restrict specific services to your Tailscale network without modifying docker-compose.yml, Caddy, or domain DNS.
Prerequisites
- Tailscale Setup — your VPS must be joined to your tailnet
- Docker Compose — services running in Docker
- UFW Setup — firewall basics (optional but recommended)
- SSH access to your VPS
Before You Begin
1. Check your iptables backend
On Ubuntu 22.04+, there are two iptables backends: iptables-nft (nftables-based) and iptables-legacy. Docker uses iptables-nft by default. If your iptables command points to the legacy version, your rules will silently fail — they’ll appear in the output but won’t actually affect Docker traffic.
Check which backend you’re using:
sudo update-alternatives --display iptables
If the output shows iptables-legacy, switch to iptables-nft:
sudo update-alternatives --set iptables /usr/sbin/iptables-nft
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
2. Find your actual public network interface
ip route | grep default
Look for the interface name after dev. Common names include eth0, ens3, ens5, or eth1. Use this exact name in the commands below — using the wrong interface name is the most common reason these rules don’t work.
Example output:
default via 192.168.1.1 dev ens3 proto dhcp src 15.235.186.232 metric 100
→ Your interface is ens3. Use -i ens3 in all commands below.
How It Works
Docker’s nat table rewrites the destination port (DNAT) before the packet reaches the DOCKER-USER chain. This is the key concept:
Your compose: ports: "5301:8090"
↓
Internet arrives: ens3:5301
↓
Docker DNAT: 5301 → 8090 (rewritten)
↓
DOCKER-USER sees: destination port 8090 (NOT 5301)
So your iptables rules must match the container port (right side of host:container), not the host port (left side).
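You can inspect these DNAT rewrites yourself; Docker's published ports appear in the nat table (assuming the iptables-nft backend described above):
sudo iptables -t nat -L DOCKER -n
# Each published port shows a DNAT rule rewriting the host port to the container IP and port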
Steps
3. Add iptables rules
First, allow all Tailscale traffic. Then block public traffic to specific container ports:
# Allow Tailscale traffic on all ports (must come first)
sudo iptables -I DOCKER-USER -i tailscale0 -j ACCEPT
# Block Beszel (compose has "5301:8090" → match container port 8090)
sudo iptables -I DOCKER-USER -i ens3 -p tcp --dport 8090 -j DROP
# Block n8n (compose has "5302:5678" → match container port 5678)
sudo iptables -I DOCKER-USER -i ens3 -p tcp --dport 5678 -j DROP
⚠️ Critical: Use the container port (right side), not the host port (left side). A rule with --dport 5301 will never match — the packet counter stays at 0. Docker already rewrote it to 8090 before this chain runs.
You do not need to change docker-compose.yml or restart containers.
4. Persist rules across reboots
sudo apt install -y iptables-persistent
sudo netfilter-persistent save
Select Yes when prompted to save current IPv4 and IPv6 rules.
5. Verify the rules are active
Use -v (verbose) to see the interface column and packet counters:
sudo iptables -L DOCKER-USER -n --line-numbers -v
Expected output:
Chain DOCKER-USER (1 references)
num pkts bytes target prot opt in out source destination
1 112 6892 ACCEPT 0 -- tailscale0 * 0.0.0.0/0 0.0.0.0/0
2 0 0 DROP 6 -- ens3 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8090
3 0 0 DROP 6 -- ens3 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5678
The pkts counter on the DROP rules will increase each time someone tries to access those ports publicly. A counter stuck at 0 means the rule is never matching — usually because you used the host port instead of the container port.
Troubleshooting:
- If the in column is blank, you forgot -i ens3. Flush and re-add.
- If pkts stays at 0 on DROP rules but the port is still publicly accessible, you used the host port. Flush and re-add with the container port:
sudo iptables -F DOCKER-USER
sudo iptables -I DOCKER-USER -i tailscale0 -j ACCEPT
sudo iptables -I DOCKER-USER -i ens3 -p tcp --dport 8090 -j DROP
sudo iptables -I DOCKER-USER -i ens3 -p tcp --dport 5678 -j DROP
sudo netfilter-persistent save
6. Find your Tailscale address
Option A: Tailscale IP (always works)
Get your VPS’s Tailscale IP:
tailscale ip -4
# → 100.64.x.x
Use it directly:
http://100.64.x.x:8090
This works regardless of DNS configuration.
Option B: Magic DNS (if enabled)
If your tailnet has Magic DNS enabled, Tailscale assigns each machine a name. Check yours:
tailscale status
The output shows your machine name (e.g., vps). Depending on your tailnet’s DNS setup, you may be able to reach it as:
http://vps:8090
Or with a full domain if your tailnet uses one (e.g., vps.your-tailnet.ts.net for hosted Tailscale, or a custom domain for Headscale).
If you’re unsure whether Magic DNS is configured, use Option A (the Tailscale IP). It always works.
7. Test access
From a Tailscale-connected device:
curl -I http://<tailscale-address>:8090
# Expected: HTTP 200
From a non-Tailscale device (e.g., mobile data):
curl -I --connect-timeout 5 http://<your-vps-public-ip>:8090
# Expected: timeout / no response
Adding More Services
Whenever you deploy a new private service, find its container port (the right side of the ports mapping in docker-compose.yml) and add a DROP rule:
sudo iptables -I DOCKER-USER -i ens3 -p tcp --dport <container-port> -j DROP
sudo netfilter-persistent save
For example, if your compose has ports: - "9999:3000", block port 3000 (not 9999).
No container restarts or compose changes are required.
Removing a Rule
List current rules with line numbers:
sudo iptables -L DOCKER-USER -n --line-numbers
Delete by number:
sudo iptables -D DOCKER-USER <number>
sudo netfilter-persistent save
Key Commands
| Command | Description |
|---|---|
| ip route \| grep default | Find your public network interface |
| sudo update-alternatives --display iptables | Check iptables backend |
| sudo iptables -L DOCKER-USER -n --line-numbers -v | List rules with packet counters |
| sudo iptables -I DOCKER-USER -i tailscale0 -j ACCEPT | Allow Tailscale traffic |
| sudo iptables -I DOCKER-USER -i ens3 -p tcp --dport <container-port> -j DROP | Block a port from public |
| sudo iptables -F DOCKER-USER | Flush all DOCKER-USER rules (start over) |
| sudo iptables -D DOCKER-USER <num> | Delete a rule by line number |
| sudo netfilter-persistent save | Save rules to survive reboots |
Notes
- Docker Compose files remain unchanged. The ports: mapping stays as-is; iptables handles the restriction at the network layer.
- Traffic from tailscale0, lo, and other interfaces is not affected by the DROP rules.
- This method works for any TCP service. For UDP services, replace -p tcp with -p udp.
- Multiple containers with the same port: If two containers use the same container port (e.g., both map to :8090), blocking that port affects both. To avoid this, either use different container ports in each compose file, or remove the ports: mapping entirely for services that are only accessed through Caddy’s internal network.
Blocking a Single Container Instead of a Port
If you need to block one specific container without affecting others on the same port, block by container IP instead:
# Find the container's IP
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>
# Block public traffic to that specific container IP
sudo iptables -I DOCKER-USER -i ens3 -d <container-ip> -j DROP
sudo netfilter-persistent save
Warning: Container IPs change when you recreate the container (docker compose up -d --force-recreate). You’ll need to update the rule after each recreation.
Caddy Setup
Overview
Caddy is a modern web server that handles HTTPS automatically, reverse proxies, and static file serving with minimal configuration. Unlike traditional servers, Caddy obtains and renews SSL certificates from Let’s Encrypt on its own — no separate tool like Certbot required.
This guide runs Caddy as a Docker container via Docker Compose, which keeps the host clean and makes the configuration portable.
Prerequisites
- VPS setup completed (see VPS Setup)
- UFW configured (see UFW Setup)
- Docker installed and configured (see Docker Setup)
- Docker Compose installed (see Docker Compose)
- Domain name pointed to your VPS IP
What is a domain? A domain (like example.com) is a human-readable address that points to your server’s IP address. You can buy domains from registrars like Namecheap or Cloudflare. After buying a domain, you need to create an A record in your DNS settings that points to your VPS’s IP address. Caddy needs this to verify you own the domain and issue an SSL certificate.
Firewall Rules
Caddy needs ports 80 (HTTP) and 443 (HTTPS) open so it can serve traffic and complete ACME challenges:
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
Docker Compose Deployment
Create the Caddy directory:
sudo mkdir -p /opt/caddy && cd /opt/caddy
Create the Caddyfile:
sudo vim Caddyfile
A minimal reverse proxy configuration looks like this:
<domain> {
reverse_proxy <service-name>:<port>
}
Create docker-compose.yml:
services:
caddy:
image: caddy:2
container_name: caddy
ports:
- "80:80"
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- caddy_data:/data
- caddy_config:/config
networks:
- caddy
restart: unless-stopped
volumes:
caddy_data:
caddy_config:
networks:
caddy:
external: true
Create the external network before starting Caddy:
sudo docker network create caddy
| Volume / Network | Purpose |
|---|---|
| ./Caddyfile | Your server configuration |
| caddy_data | TLS certificates and state (persisted) |
| caddy_config | Caddy’s internal config |
| caddy (external) | Shared network for Caddy to reach other containers |
Start Caddy:
sudo docker compose up -d
Caddy will automatically obtain an SSL certificate for <domain> and begin serving HTTPS.
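Once DNS points at your VPS, you can verify issuance from any machine (replace <domain> with your real domain):
curl -I https://<domain>
# A successful response over HTTPS means the certificate was issued
If it fails, run sudo docker compose logs caddy from /opt/caddy and look for ACME errors.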
Connecting Other Services
Each of your services lives in its own Docker Compose project under /opt. To let Caddy reverse proxy to them, attach each service to the external caddy network.
In the service’s docker-compose.yml, add:
services:
<service-name>:
# ... existing config ...
networks:
- caddy
networks:
caddy:
external: true
Then recreate the container to join the network:
cd /opt/<service-name>
sudo docker compose up -d
Once connected, Caddy can reach the service by its Compose service name:
<domain> {
reverse_proxy <service-name>:<port>
}
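To confirm Caddy can actually reach a service across the shared network, fetch it from inside the Caddy container (the official caddy image is Alpine-based, so busybox wget should be available):
docker exec caddy wget -qO- http://<service-name>:<port>
# Prints the service's response body if the network attachment worked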
Caddyfile Basics
Reverse Proxy to a Container
With the shared caddy network, use the service name from the target Compose file:
calibre.<yourdomain>.com {
reverse_proxy calibre-web:8083
}
Static File Serving
Serve files from a directory inside the Caddy container:
<domain> {
root * /usr/share/caddy
file_server
}
Mount the files into the container:
services:
caddy:
# ...
volumes:
- ./site:/usr/share/caddy
Multiple Sites
Caddy handles multiple sites in one Caddyfile:
calibre.<yourdomain>.com {
reverse_proxy calibre-web:8083
}
linkding.<yourdomain>.com {
reverse_proxy linkding:9090
}
Common Commands
cd /opt/caddy
sudo docker compose up -d # Start Caddy
sudo docker compose down # Stop and remove
sudo docker compose logs -f caddy # Follow logs
sudo docker compose restart caddy # Restart after Caddyfile changes
Caddy does not watch the Caddyfile for changes. After editing it, apply the new config by restarting the container:
sudo docker compose restart caddy
Notes
- Caddy stores certificates in the caddy_data volume. Do not delete this volume unless you want to reissue certificates.
- The caddy Docker network is external so multiple Compose projects can attach to it. Create it once with docker network create caddy.
- If you need a wildcard certificate or a DNS provider challenge, Caddy supports those via modules, but the default HTTP/ALPN challenge works for standard domains without extra configuration.
SSH Reverse Tunnel
Overview
An SSH reverse tunnel exposes a local service to the internet through a VPS. It works by establishing an outbound SSH connection from your local machine to the VPS, which then forwards incoming traffic back through that connection to your local service.
This is useful when you are behind NAT, a firewall, or lack a public IP address.
Architecture
Internet Request
│
▼
┌─────────────────┐
│ VPS (Public) │
│ Caddy :443 │
│ │ │
│ ▼ │
│ localhost:5201 │◄── SSH tunnel listens here
└────────┬────────┘
│
SSH Connection
(outbound from local)
│
▼
┌─────────────────┐
│ Local Machine │
│ localhost:8080 │◄── Your service
└─────────────────┘
Traffic flow:
- Client requests https://<domain>
- Caddy terminates TLS and proxies to 127.0.0.1:5201
- Port 5201 is the remote end of the SSH tunnel
- Traffic flows through the tunnel to your local machine on port 8080
Prerequisites
- VPS setup completed (see VPS Setup)
- Caddy running in Docker (see Caddy Setup)
- A local service running (this guide uses localhost:8080)
Setup
Add a reverse proxy block to your Caddyfile for the subdomain you want to expose. Caddy will automatically handle HTTPS:
tunnel.<domain> {
reverse_proxy localhost:5201
}
Restart Caddy to apply the change:
docker compose restart caddy
SSH Reverse Tunnel Command
From your local machine:
ssh -N -R 5201:localhost:8080 <username>@<vps-ip>
| Flag | Purpose |
|---|---|
| -N | Do not execute a remote command. Port forwarding only. |
| -R 5201:localhost:8080 | Bind remote port 5201 to local port 8080. |
Format: -R [remote_port]:[local_host]:[local_port]
The tunnel remains open while the SSH connection is active.
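For a longer-lived tunnel, two standard OpenSSH options help: ExitOnForwardFailure makes the client quit if the remote port is already taken, and ServerAliveInterval keeps the connection from timing out behind NAT:
ssh -N -R 5201:localhost:8080 -o ExitOnForwardFailure=yes -o ServerAliveInterval=30 <username>@<vps-ip>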
Docker Networking Note
If Caddy is running in a Docker container (non-host network), it may not be able to reach 127.0.0.1:5201 on the host. To fix this, either:
1. Add GatewayPorts to the VPS SSH server config (/etc/ssh/sshd_config):
GatewayPorts clientspecified
Restart the SSH service, then bind the remote end to 0.0.0.0:5201 in your tunnel command:
ssh -N -R 0.0.0.0:5201:localhost:8080 <username>@<vps-ip>
2. Use host network mode for Caddy (not recommended for production).
Observability
Stream Caddy logs to your local machine:
ssh <username>@<vps-ip> "docker logs -f caddy" | grep --line-buffered <domain>
(The container is named caddy in the Caddy Setup guide; adjust if your container name differs.)
Example
# Start local service
npm run dev # localhost:3000
# Establish tunnel
ssh -N -R 5201:localhost:3000 <username>@<vps-ip>
Access from anywhere: https://<domain>
n8n Setup
Overview
n8n is a free, open-source workflow automation tool. It lets you connect apps, APIs, and services to automate repetitive tasks — similar to Zapier or Make, but self-hosted and you control your data.
Prerequisites
- Caddy running in Docker (see Caddy Setup)
- External caddy Docker network created
Docker Compose Setup
Create the n8n directory:
sudo mkdir -p /opt/n8n
cd /opt/n8n
Create a .env file to store sensitive values:
cat > .env << 'EOF'
# n8n behind Caddy reverse proxy
N8N_PROXY_HOPS=1
EOF
Create docker-compose.yml:
services:
n8n:
image: n8nio/n8n
restart: unless-stopped
environment:
- N8N_HOST=<subdomain>
- N8N_PORT=5678
- N8N_PROTOCOL=https
- WEBHOOK_URL=https://<subdomain>
- N8N_PROXY_HOPS=${N8N_PROXY_HOPS}
volumes:
- n8n_data:/home/node/.n8n
networks:
- caddy
networks:
caddy:
external: true
volumes:
n8n_data:
Replace <subdomain> with your n8n domain (e.g., n8n.yourdomain.com).
Note: n8n no longer uses N8N_BASIC_AUTH_* environment variables (deprecated since 2023). Authentication is handled through n8n’s built-in user system — you’ll create your admin account on first login.
Start n8n:
sudo docker compose up -d
Caddy Configuration
Add to your Caddyfile:
<subdomain> {
reverse_proxy n8n:5678
}
Restart Caddy:
docker compose -f /opt/caddy/docker-compose.yml restart caddy
Access n8n
Visit https://<subdomain> and create your admin account. This is where you set up your username and password — n8n handles authentication through its built-in user system.
Security Note
By default, Docker publishes container ports to all network interfaces (0.0.0.0), which means the n8n web interface is accessible from the public internet.
If you want to restrict n8n to your Tailscale network only (recommended for admin tools), see Tailscale-Only Services.
Key Commands
docker compose -f /opt/n8n/docker-compose.yml up -d # Start
docker compose -f /opt/n8n/docker-compose.yml down # Stop
docker compose -f /opt/n8n/docker-compose.yml logs -f # View logs
Data Persistence
n8n stores data in the n8n_data Docker volume. Your workflows and credentials persist across restarts.
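The volume can be backed up by tarring its contents from a throwaway container. Note that Docker Compose usually prefixes volume names with the project directory (here likely n8n_n8n_data); check docker volume ls for the exact name:
docker run --rm -v n8n_n8n_data:/data -v "$(pwd)":/backup alpine tar czf /backup/n8n-backup.tar.gz -C /data .
# Creates n8n-backup.tar.gz in the current directory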
Beszel Setup
Overview
Beszel is a lightweight, open-source system monitoring tool. It tracks CPU, memory, disk, and network usage over time with a clean, minimal dashboard.
Unlike service uptime monitors, which alert you when websites go down, Beszel shows you how your VPS resources are trending so you can spot problems before they cause outages.
Prerequisites
- Caddy running in Docker (see Caddy Setup)
- External caddy Docker network created
Docker Compose Setup
Create the Beszel directory:
sudo mkdir -p /opt/beszel
cd /opt/beszel
Create docker-compose.yml with both the Hub and Agent:
services:
beszel:
image: henrygd/beszel
container_name: beszel
restart: unless-stopped
environment:
APP_URL: http://localhost:8090
ports:
- "8090:8090"
volumes:
- ./beszel_data:/beszel_data
networks:
- caddy
beszel-agent:
image: henrygd/beszel-agent
container_name: beszel-agent
restart: unless-stopped
network_mode: host
volumes:
- ./beszel_agent_data:/var/lib/beszel-agent
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
LISTEN: 45876
KEY: "<public-key>"
HUB_URL: http://localhost:8090
networks:
caddy:
external: true
Why network_mode: host for the agent? The agent needs direct access to the host’s network interface stats (bandwidth, connections, etc.). Host network mode gives the agent visibility into the real network. Without it, the agent only sees the container’s own network traffic, which isn’t useful for monitoring.
Get Your Agent Key
Before starting, you need the agent’s public key:
1. Start the Hub first (without the agent):
sudo docker compose up -d beszel
2. Visit https://beszel.<yourdomain>.com and create your admin account.
3. Click “Add System”, enter a name (e.g., vps), and click Add.
4. Copy the SSH public key shown — you’ll need it for the KEY value in the compose file above.
5. Edit docker-compose.yml and paste the key into the agent’s KEY environment variable.
Note: You can also generate a reusable token (/settings/tokens) and use TOKEN and HUB_URL env vars instead of pre-registering the system.
Start both services:
sudo docker compose up -d
Return to the Beszel dashboard. Your VPS metrics should appear within a few seconds.
Caddy Configuration
Add to your Caddyfile (/opt/caddy/Caddyfile):
beszel.<yourdomain>.com {
reverse_proxy beszel:8090
}
Replace <yourdomain>.com with your actual domain.
Restart Caddy:
docker compose -f /opt/caddy/docker-compose.yml restart caddy
Access Beszel
Visit https://beszel.<yourdomain>.com to view your dashboard.
Security Note
By default, Docker publishes container ports to all network interfaces (0.0.0.0), which means the Beszel dashboard is accessible from the public internet on port 8090.
If you want to restrict Beszel to your Tailscale network only (recommended for monitoring dashboards), see Tailscale-Only Services.
Key Commands
docker compose -f /opt/beszel/docker-compose.yml up -d # Start
docker compose -f /opt/beszel/docker-compose.yml down # Stop
docker compose -f /opt/beszel/docker-compose.yml logs -f # View logs
Data Persistence
Beszel stores its configuration and historical metrics in the beszel_data directory (bind-mounted from ./beszel_data). Data persists across container restarts and recreations.
Garage Setup
Overview
Garage is an open-source distributed object storage service compatible with the Amazon S3 API. It lets you self-host S3-compatible storage on your own infrastructure — useful for backing up files, hosting app data, or serving as object storage for self-hosted services like Nextcloud.
Unlike cloud S3 services, Garage keeps your data on your own servers with no egress fees.
Prerequisites
- Docker installed and configured (see Docker Setup)
- Docker Compose installed (see Docker Compose)
- Caddy running with external caddy network (see Caddy Setup)
- Domain name pointed to your VPS
Docker Compose Setup
Create the Garage directory:
sudo mkdir -p /opt/garage
cd /opt/garage
Create garage.toml:
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "sqlite"
replication_factor = 1
rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "<rpc-secret>"
[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.<yourdomain>.com"
[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.<yourdomain>.com"
index = "index.html"
[admin]
api_bind_addr = "[::]:3903"
admin_token = "<admin-token>"
metrics_token = "<metrics-token>"
Generate secrets and replace the placeholders:
openssl rand -hex 32 # rpc-secret
openssl rand -base64 32 # admin-token
openssl rand -base64 32 # metrics-token
What are these secrets?
- rpc_secret: Encrypts communication between Garage nodes (only matters in multi-node clusters)
- admin_token: Authentication token for admin CLI commands
- metrics_token: Token for accessing Prometheus metrics
Create docker-compose.yml:
services:
garage:
image: dxflrs/garage:v2.2.0
container_name: garage
ports:
- "3900:3900"
volumes:
- ./garage.toml:/etc/garage.toml
- garage_meta:/var/lib/garage/meta
- garage_data:/var/lib/garage/data
networks:
- caddy
restart: unless-stopped
volumes:
garage_meta:
garage_data:
networks:
caddy:
external: true
Start Garage:
sudo docker compose up -d
Note: Check Docker Hub for the latest version tag. Replace v2.2.0 with the newest stable release.
Initialize Garage
Garage requires a one-time initialization before it can store data.
1. Check node status
docker exec garage /garage status
Copy the node ID from the output (first column, e.g., 563e1ac825ee3323).
2. Assign cluster layout
Replace <node-id> with the ID from the previous step:
docker exec garage /garage layout assign -z dc1 -c 1G <node-id>
docker exec garage /garage layout apply --version 1
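To review the cluster layout before or after applying it, Garage can print the current and staged layout:
docker exec garage /garage layout show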
3. Create a bucket
docker exec garage /garage bucket create my-bucket
4. Create an API key
docker exec garage /garage key create my-key
5. Allow key access to the bucket
docker exec garage /garage bucket allow --read --write my-bucket --key my-key
6. Get key credentials
docker exec garage /garage key info my-key
Save the Key ID and Secret key — you’ll need them for S3 clients.
Caddy Configuration
Garage serves static websites from buckets through the S3 web endpoint (port 3902). To expose your buckets as websites via HTTPS, add to your Caddyfile:
garage.<yourdomain>.com {
reverse_proxy garage:3902
}
Note: Port 3902 is the S3 web endpoint — it serves static website files stored in your buckets. It is NOT a web admin UI. Garage v2 does not include a graphical admin panel; you manage buckets via the CLI or S3-compatible tools.
S3 API Access
For S3 API access through Caddy, you need wildcard subdomain support. S3 clients access buckets as my-bucket.s3.<yourdomain>.com. Caddy can handle this with wildcard certificates, but requires a DNS challenge:
*.s3.<yourdomain>.com {
tls {
dns <provider>
}
reverse_proxy garage:3900
}
Setting up DNS challenges requires Caddy DNS plugins and is beyond the scope of this guide. For most use cases, accessing Garage directly via Tailscale or the awscli endpoint (shown below) is simpler.
Restart Caddy after any changes:
docker compose -f /opt/caddy/docker-compose.yml restart caddy
Access Garage
Garage is managed via the CLI or any S3-compatible client. There is no web admin UI.
Using awscli
Install awscli:
sudo apt install -y awscli
Configure your credentials (the interactive setup stores them securely in ~/.aws/credentials):
aws configure
When prompted:
- AWS Access Key ID: Your Garage key ID from step 6 above
- AWS Secret Access Key: Your Garage secret key from step 6
- Default region name: garage
- Default output format: (leave blank or type json)
Then set the endpoint URL:
export AWS_ENDPOINT_URL=http://127.0.0.1:3900
(The S3 API on port 3900 speaks plain HTTP; HTTPS only applies if you front it with a proxy like Caddy.)
Tip: Add the export line to your ~/.bashrc so it’s set automatically in new sessions.
Use Garage:
aws s3 ls
aws s3 cp file.txt s3://my-bucket/
aws s3 ls s3://my-bucket/
Managing Buckets
docker exec garage /garage bucket list # List all buckets
docker exec garage /garage bucket info my-bucket # Show bucket details
docker exec garage /garage key list # List API keys
docker exec garage /garage status # Cluster node status
Key Commands
docker compose -f /opt/garage/docker-compose.yml up -d # Start
docker compose -f /opt/garage/docker-compose.yml down # Stop
docker compose -f /opt/garage/docker-compose.yml logs -f # View logs
Data Persistence
Garage stores bucket metadata in the garage_meta volume and object data in the garage_data volume. Your data persists across container restarts.