Set up a new Virtual Private Server (VPS)
Recently, I set up my first virtual private server (VPS) for hosting my websites and applications. Chances are high that the webpage you are looking at right now is served by this very server. After some research, I decided to go with Strato as my hosting provider. My friend Codie recommended them, and they offer great prices and all the options I need. The setup process turned out to be quite straightforward, but I wanted to document the steps I took, both for future reference and to help others embarking on a similar task.
This guide covers the complete setup from initial server provisioning to having a fully functional development environment with SSH access, user management, and basic web server configuration. Let's dive into the details!
Choosing your VPS (Hardware)
Depending on your use case, you want to select the hardware for your VPS carefully. You can either pay more for a VPS that has more than enough power for your services, or pay less and accept the risk of running into hardware bottlenecks.
Personally, I went for a STRATO VPS Linux VC4-8, which has 4 cores, 8 GB of RAM and 240 GB of storage. I pay 6 EUR/month for it, which I think is quite a steal. Hardware-wise it can handle small to medium workloads. It's definitely overkill for just hosting a private website, but you can run other services on it as well; for example, I run my analytics tool umami on there (blogpost: here). I would refrain from e-mail hosting, though. I tried it, and it was painful.
Initial Server Setup and SSH Configuration
The first step in setting up any VPS is getting secure SSH access. I started by generating the necessary SSH keys using PuTTYgen to get the public key in the right format for the initial server installation. This was necessary because my SSH key was in the wrong format and the Strato setup process did not recognize it at first.
- Use PuTTYgen to generate the public key (in the correct format) for initial server install
- Let the server complete its installation process
- Login with PuTTY using the IP address, port 22, and SSH authentication credentials with the private key file
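If you are on Linux or macOS rather than Windows, you don't need PuTTYgen: OpenSSH's `ssh-keygen` produces the key pair directly in the standard format. A minimal sketch (the file path and comment below are just for the demo; on a real machine you would use `~/.ssh/id_ed25519` and protect the key with a passphrase):

```shell
# Generate an Ed25519 key pair without a passphrase (demo path only)
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$keydir/vps_key" -C "vps-access" >/dev/null
# The .pub file is what you paste into your provider's setup form
cat "$keydir/vps_key.pub"
```

The private key stays on your machine; only the public key ever goes to the server.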
Once I had initial access, the next crucial step was creating a new user account. Running as root is never a good practice, so I set up a dedicated user for my daily operations.
Creating a New User and Setting Up SSH Access
Here's the sequence of commands I used to create and configure the new user:
useradd -m -g users <username>
adduser <username> sudo
# If you need to change the group later:
usermod -a -G groupName userName
# Copy over the SSH authorized key file and fix ownership/permissions,
# otherwise sshd will refuse the key:
mkdir -p /home/<username>/.ssh
cp ~/.ssh/authorized_keys /home/<username>/.ssh/authorized_keys
chown -R <username>:users /home/<username>/.ssh
chmod 700 /home/<username>/.ssh
chmod 600 /home/<username>/.ssh/authorized_keys
Securing SSH Access
Security is paramount when setting up a server. One of the first things I did was change the default SSH port to reduce automated attack attempts.
# Change the Port directive in /etc/ssh/sshd_config (e.g. "Port 2222"),
# then reload systemd; on recent Ubuntu releases ssh is socket-activated,
# so both the socket and the service need a restart:
sudo systemctl daemon-reload
sudo systemctl restart ssh.socket
sudo systemctl restart ssh.service
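The port change itself is a one-line edit. Here is a sketch of that edit done on a throwaway copy of the file, so the `sed` pattern is visible without touching a live config (2222 is an example port; on the server you would run the `sed` line with `sudo` against `/etc/ssh/sshd_config` before restarting ssh):

```shell
# Demo on a temp copy of a minimal sshd_config
cfg=$(mktemp)
printf '#Port 22\nPermitRootLogin no\n' > "$cfg"
# Uncomment (if needed) and rewrite the Port line
sed -i 's/^#\?Port .*/Port 2222/' "$cfg"
grep '^Port' "$cfg"   # -> Port 2222
```

Keep your existing SSH session open while testing the new port in a second terminal, so a typo can't lock you out.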
On the client side, I edited the SSH config to have a more convenient connection setup. This makes it much easier to connect to the server without remembering IP addresses and ports every time.
Host NAME
HostName SERVER_IP_ADDRESS
User USER
Port NEW_CUSTOM_PORT
This simple change can significantly reduce the noise from automated bots trying to brute force their way into your server.
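You can also sanity-check what a config entry resolves to without actually connecting, using `ssh -G`. A small sketch (the alias `myvps`, the address, and the port below are made-up example values):

```shell
# Write a demo client config and ask ssh to print the resolved options
conf=$(mktemp)
cat > "$conf" <<'EOF'
Host myvps
    HostName 203.0.113.10
    User deploy
    Port 2222
EOF
# -G prints the effective configuration instead of connecting
ssh -G -F "$conf" myvps | grep -E '^(hostname|user|port) '
```

With the entry in place, connecting becomes a plain `ssh myvps` instead of remembering IP, user, and port.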
Securing Access with UFW (Firewall)
UFW (Uncomplicated Firewall) is a friendly front end for managing iptables firewall rules on Linux, designed to simplify configuring a secure network. I block everything by default and only allow the ports I need: port 443 for HTTPS, port 80 for certbot's HTTP challenge, and of course my custom SSH port (be careful not to lock yourself out of SSH). A typical set of UFW rules could look like this (custom SSH port: 2222):
sudo ufw default deny incoming
sudo ufw default allow outgoing  # outgoing traffic is usually considered safe
sudo ufw allow 2222/tcp          # custom SSH port -- add this BEFORE enabling
sudo ufw allow 443/tcp           # HTTPS
sudo ufw allow 80/tcp            # HTTP, needed for certbot's challenge
sudo ufw enable
To review your rules, you can use:
sudo ufw status numbered
Setting Up the Development Environment
With secure access established, it was time to set up a proper development environment. I went with my preferred tools and configurations:
sudo apt-get update
sudo apt-get install zsh
sudo apt-get install git
curl -L https://github.com/astral-sh/uv/releases/download/0.1.21/uv-installer.sh | sh
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
This installs zsh, git, oh-my-zsh and uv (my favorite Python package manager since switching from poetry).
For Node.js development, I installed the latest stable version using the Node Version Manager (n):
sudo apt-get install nodejs
sudo apt-get install npm
sudo npm install n -g
sudo n stable
# npx ships with npm >= 5.2, so this last step is usually redundant:
npm install -g npx
I also installed the GitHub CLI for easier repository management and nginx for web serving capabilities:
# Install GitHub CLI (look up the specific command for your system)
sudo apt-get install nginx
sudo apt-get install python3-certbot-nginx
Setting Up Services for Your Website
Next, we will explore the services used for serving your website. I chose to go with Next.js, a React framework by Vercel. This lets me sleep easy at night, because if the demand for my frontend ever grows too much, I can easily deploy to Vercel and be done with it. For this website, with almost only static content, that is not a problem I need to worry about right now.
So we will dive into Nginx, Certbot, systemd and a bit of shell scripting to launch your service in production or development mode.
Nginx, Certbot, and Next.js Configuration
Understanding Nginx and Its Role
Nginx (pronounced 'engine-x') is a powerful web server and reverse proxy that acts as the front door to your web applications. Think of it as a traffic controller that sits between your users and your actual application server. Here's what nginx does for your setup:
- Reverse Proxy: Routes incoming requests to your Next.js application running on localhost
- SSL Termination: Handles HTTPS encryption/decryption so your app doesn't have to
- Load Balancing: Can distribute traffic across multiple application instances
- Static File Serving: Serves static assets (images, CSS, JS) directly without hitting your app
- Security: Adds security headers and protects against common web attacks
- Caching: Can cache responses to improve performance
- Compression: Compresses responses to reduce bandwidth usage
In my setup, nginx receives all incoming HTTP/HTTPS requests and forwards them to my Next.js application running on a custom port (I like to keep 3000 clean). This separation of concerns is a best practice - nginx handles web server duties while your application focuses on business logic.
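Before wiring nginx to the app, it's worth confirming that something actually answers on the upstream port. A quick sketch using Python's built-in HTTP server as a stand-in for the Next.js app (port 3001 is just an example; substitute your app's custom port):

```shell
# Start a throwaway server on the custom port, fetch from it, then clean up
python3 -m http.server 3001 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
status=$(python3 -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:3001/').status)")
echo "upstream answered with HTTP $status"
kill "$srv"
```

If this responds, any remaining problem is in the nginx configuration rather than in the application.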
Nginx Configuration Breakdown
Here's the nginx configuration I use for my website. I'll explain each section and provide a template you can adapt for your own domain:
server {
    server_name YOUR_DOMAIN.com;

    location / {
        #rewrite ^/(.*)/$ /$1 permanent;
        proxy_pass http://localhost:CUSTOM_APP_PORT;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;

        # Upload limits
        client_max_body_size 100M;
    }

    # Configure access and error logs
    access_log /var/log/nginx/YOUR_DOMAIN-fe.log;
    error_log /var/log/nginx/YOUR_DOMAIN-fe-error.log;

    # Security related headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/YOUR_DOMAIN.com-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/YOUR_DOMAIN.com-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = YOUR_DOMAIN.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name YOUR_DOMAIN.com;
    listen 80;
    return 404; # managed by Certbot
}
Let me break down the key components of this configuration:
Server Block and Domain
The `server_name` directive tells nginx which domain this configuration applies to. Replace `YOUR_DOMAIN.com` with your actual domain name.
The second server block at the bottom of the file redirects all HTTP traffic to HTTPS for security, so only HTTPS traffic ever reaches your application.
Proxy Configuration
The `location /` block handles all incoming requests and forwards them to your application:
- `proxy_pass http://localhost:CUSTOM_APP_PORT` - Forwards requests to your app (replace CUSTOM_APP_PORT with your app's port, e.g., 3001)
- `proxy_set_header` directives - Pass important headers to your application
- `client_max_body_size 100M` - Allows file uploads up to 100MB
Security Headers
The security headers protect against common web vulnerabilities:
- `X-Frame-Options` - Prevents clickjacking attacks
- `X-Content-Type-Options` - Prevents MIME type sniffing
- `X-XSS-Protection` - Enables browser's XSS protection
- `Content-Security-Policy` - Controls which resources can be loaded
- `Strict-Transport-Security` - Forces HTTPS connections
SSL/HTTPS Configuration
I really love the ease of use of certbot. You just run it with the nginx flag enabled and you are good to go. Remember, though, to verify your renewal setup: once the certificates expire, your service becomes unreachable over HTTPS.
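Certbot installs a systemd timer that renews certificates automatically, and `sudo certbot renew --dry-run` verifies that renewal works. If you want to monitor expiry yourself, `openssl x509 -checkend` tells you whether a certificate is still valid a given number of seconds from now. A sketch using a throwaway self-signed certificate (on the server you would point it at the live `fullchain.pem` instead):

```shell
# Create a demo certificate valid for 30 days
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=YOUR_DOMAIN.com" \
  -keyout "$certdir/demo.key" -out "$certdir/demo.crt" 2>/dev/null
# -checkend exits 0 only if the cert is still valid 14 days from now
openssl x509 -checkend $((14*24*3600)) -in "$certdir/demo.crt" >/dev/null \
  && echo "certificate OK" || echo "certificate about to expire"
```

A line like this in a cron job or monitoring check catches a broken renewal before visitors do.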
Systemd Service Configuration
Systemd is the service manager in modern Linux distributions that handles starting, stopping, and managing system services. For your web application, you'll want to create a systemd service that automatically starts your application on boot and restarts it if it crashes.
Here's the systemd service configuration I use for my Next.js application:
[Unit]
Description=Your App Name Frontend
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=YOUR_USERNAME
Group=users
WorkingDirectory=/home/YOUR_USERNAME/YOUR_PROJECT_PATH/
ExecStart=/home/YOUR_USERNAME/YOUR_PROJECT_PATH/server/start.sh prod
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Application Startup Script
The systemd service calls a startup script that handles the application lifecycle. This script manages dependencies, builds the application, and starts it in the correct mode. Here's the `start.sh` script I use:
#!/bin/bash

MODE=$1
if [[ -z "${MODE}" ]]; then
    echo "Usage: $0 <dev|prod>"
    exit 1
fi

export NVM_DIR="${HOME}/.nvm"
[[ -s "${NVM_DIR}/nvm.sh" ]] && \. "${NVM_DIR}/nvm.sh"                    # This loads nvm
[[ -s "${NVM_DIR}/bash_completion" ]] && \. "${NVM_DIR}/bash_completion"  # This loads nvm bash_completion

case "${MODE}" in
    "dev")
        npm install
        export PORT=DEV_PORT
        npm run dev
        ;;
    "prod")
        echo "Installing dependencies..."
        npm install
        echo "Building application..."
        npm run build
        if [ ! -d ".next" ]; then
            echo "Error: Build directory '.next' not found. Build may have failed."
            exit 1
        fi
        echo "Starting production server on port PROD_PORT..."
        export PORT=PROD_PORT
        npm run start
        ;;
    *)
        echo "Invalid mode: ${MODE}"
        echo "Usage: $0 <dev|prod>"
        exit 1
        ;;
esac
This script provides several important functions:
Mode Selection
The script accepts a mode parameter (`dev` or `prod`) to determine how to run the application:
- Development mode: Installs dependencies and starts the development server with hot reloading
- Production mode: Installs dependencies, builds the application, and starts the production server
Node Version Manager (NVM) Integration
The script loads NVM to ensure the correct Node.js version is used, which is crucial for consistent application behavior.
Build Verification
In production mode, the script verifies that the build was successful by checking for the `.next` directory before starting the server.
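Note that `DEV_PORT` and `PROD_PORT` in the script are literal placeholders you replace with real numbers. If you prefer, the choice can be factored into a small helper with overridable defaults; the function name and the fallback ports 3001/3002 below are my own invention, not part of the original script:

```shell
# Hypothetical helper: resolve the port from the mode, falling back to
# defaults when DEV_PORT / PROD_PORT are not set in the environment
port_for_mode() {
  case "$1" in
    dev)  echo "${DEV_PORT:-3001}" ;;
    prod) echo "${PROD_PORT:-3002}" ;;
    *)    echo "unknown mode: $1" >&2; return 1 ;;
  esac
}

# Inside start.sh you would then use:  export PORT=$(port_for_mode "$MODE")
echo "$(port_for_mode dev) $(port_for_mode prod)"
```

This keeps the port numbers in one place instead of scattering them through the script.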
Installing and Managing the Service
To set up your systemd service:
# Copy your service file to systemd directory
sudo cp your-app.service /etc/systemd/system/
# Reload systemd to recognize the new service
sudo systemctl daemon-reload
# Enable the service to start on boot
sudo systemctl enable your-app.service
# Start the service
sudo systemctl start your-app.service
# Check the status
sudo systemctl status your-app.service
# View logs
sudo journalctl -u your-app.service -f
Make sure your `start.sh` script is executable:
chmod +x /path/to/your/start.sh
Final Thoughts
Setting up a VPS from scratch can seem daunting at first, but with the right approach and tools it becomes quite manageable.