Configuring a VPS machine from scratch
Starting out
When starting with a clean Linux VPS, the first thing to ensure is that you have access to the root account (or that you are part of the sudoers file in such a way that you can make administrative changes to the system). If you don't have these access rights on the target system, you must first contact its administrator and ask them for such privileges. Otherwise, this article will be of no use to you.
Creating a new user
If you don't have an active user just yet, please do the following:
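On a Debian-based system (which this guide assumes throughout), that would be:
adduser <newuser>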
Please replace <newuser> with your username.
Then, to make them part of the sudoers file, just do:
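On Debian-based systems, adding the user to the sudo group is enough:
usermod -aG sudo <newuser>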
Using root vs. using a sudoers user
There's an argument to be made between logging in as root on the system directly or using a separate user which is part of the sudoers file.
Both are viable options for administrators to make configuration changes on the target system.
Some people argue that adding other users to a system just adds complexity and, especially if you plan to be the sole administrator of the VPS, it makes little sense not to use the already built-in root user. After all, root starts out with all the privileges and, most importantly, you'll never have to remember to keep invoking "sudo" whenever you need to run administrative commands in the terminal. Proponents of this view therefore see it as the simpler approach.
However, others make the argument that constantly running as root is a security risk for the system as a whole. When running as root, any terminal command issued will also run with absolute privileges, which risks amplifying any human error to disastrous proportions.
Granted, this risk also exists for sudoers users, but only for the commands which are run with "sudo". Having the extra step of manually prefixing each command with "sudo" is seen as a preventive measure against accidental system damage.
Moreover, if the administrator is diligent, they may restrict the sudoers users' permissions so that privilege escalation is only allowed for commands which they deem safe to run, effectively blocking overly dangerous commands such as "dd" or "rm -Rf".
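As a minimal sketch of such a restriction (the command list here is hypothetical; always edit sudoers files through visudo):
# /etc/sudoers.d/limited-admin
# <newuser> may only escalate privileges for package management and service control
<newuser> ALL=(root) /usr/bin/apt-get, /usr/bin/systemctl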
Ultimately, the argument for or against using root access is mostly a philosophical one, rather than a technical one. There is no right or wrong answer to this question. Rather, each answer brings its own advantages and disadvantages to the table. What really matters is what you're more comfortable with using in the end.
Moreover, using a sudoers user rather than root is not inherently a guarantee of system safety either, and should not be taken as license to run suspicious executable files from the internet, as privilege escalation bugs have existed in the Linux kernel since its inception.
Install the necessary utilities
Install docker, postfix and nginx, all of which are utilities you will be using consistently from this point on.
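On a Debian-based system, these can come straight from the standard repositories; I'm also assuming certbot itself is installed here, since the next step installs a plugin for it (Docker installation is covered in its own section below):
apt-get install postfix nginx certbot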
Afterwards, install the certbot-plugin-gandi plugin to enable automatic certificate renewals using Gandi. This may require you to first install pip3 as well.
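Assuming the plugin is installed system-wide through pip:
apt-get install python3-pip
pip3 install certbot-plugin-gandi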
Once the plugin is installed, just do:
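The exact flags depend on the plugin version; with recent versions of certbot-plugin-gandi, requesting a certificate looks roughly like this (example.com is a placeholder for your own domain):
certbot certonly --authenticator dns-gandi --dns-gandi-credentials /etc/letsencrypt/gandi/gandi.ini -d example.com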
The contents of /etc/letsencrypt/gandi/gandi.ini should look like the following:
# live dns v5 api key
dns_gandi_api_key=<gandi_api_key>
The <gandi_api_key> token should be replaced with the actual API key generated from the Gandi website for your account.
Setting up an SMTP server
An SMTP server will be required for much of what you will be doing on the server from this point on, such as the update notifications configured below.
Setting one up is not an easy task and will be a little time consuming. Please consult the documentation here
Setting up automatic updates
System updates are a necessity for modern-day operating systems and, on Linux especially, there's a constant need to run such updates regularly to avoid the risk of running vulnerable software that can be exploited by malware.
Linux, in particular, has become an attractive target for malware writers in recent years, because corporate servers owned by renowned companies are seen as a more profitable compromise target for malicious actors who may wish to extort money from the operators of unpatched systems.
After all, large corporations running unpatched systems are more likely to pay significant amounts of money than lone private users if their systems were to be compromised.
As such, it's imperative to protect our systems from such damage by preventing the attacks in the first place. The first step in achieving this goal is to patch the system constantly. A good way to do this without requiring manual intervention is setting up automatic updates.
Debian based systems have an official package known as unattended-upgrades which can do just this. To install this package, please run the following command:
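apt-get install unattended-upgrades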
This will install the package from the official repositories. After this, the package can be configured by editing the /etc/apt/apt.conf.d/50unattended-upgrades file, which should be generated automatically when the package is installed.
There's a lot of stuff which can be configured in this file. Some of the options which I personally prefer to activate by uncommenting are the following:
// This option controls whether the development release of Ubuntu will be
// upgraded automatically. Valid values are "true", "false", and "auto".
Unattended-Upgrade::DevRelease "auto";
// Send email to this address for problems or packages upgrades
// If empty or unset then no email is sent, make sure that you
// have a working mail setup on your system. A package that provides
// 'mailx' must be installed. E.g. "user@example.com"
Unattended-Upgrade::Mail "Alexandru.Pentilescu@disroot.org";
This lets the system know that email notifications about updates should be delivered to that specific email address. This is important because, every time updates occur, I'm notified via email. Of course, you need to have an SMTP server running locally, as described in the previous step.
Then:
// Remove unused automatically installed kernel-related packages
// (kernel images, kernel headers and kernel version locked tools).
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
// Do automatic removal of newly unused dependencies after the upgrade
Unattended-Upgrade::Remove-New-Unused-Dependencies "true";
These configuration options instruct the package to automatically remove packages, such as old kernels and unused dependencies, which become obsolete as updates come in.
// Automatically reboot *WITHOUT CONFIRMATION* if
// the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "true";
This will instruct the package to automatically reboot the system when required. This is necessary after specific updates, such as kernel updates, since a newly installed kernel can only replace the running one after a reboot.
Finally:
// If automatic reboot is enabled and needed, reboot at the specific
// time instead of immediately
// Default: "now"
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
This instructs the package to perform the automatic reboot, whenever an update requires it, the next time the system clock reaches this specific configured time, rather than immediately. I set mine to reboot at 2AM. You may change the time to whichever fits your needs.
Installing docker
Docker is an almost irreplaceable piece of software that will be critical to your whole infrastructure, and it needs to be installed on the system properly. In order to do so, please follow the guide here
Force postfix to bind to non-local IP addresses on start
If we plan on using our SMTP server to relay emails coming from our docker containers, we will have to force postfix to bind to an IP address that's different from localhost. This needs to be done because, if we configure postfix to only bind to localhost, it will effectively be unreachable to our docker containers and they will not be able to use it as a relay.
In order to allow postfix to bind to non-local addresses, we have to create the configuration file /etc/sysctl.d/80-network.conf with the following contents:
net.ipv4.ip_nonlocal_bind = 1
net.ipv6.ip_nonlocal_bind = 1
Honestly, the "ipv6" line is unnecessary for our purposes, but I'm adding it anyway. After this file is added and the system is rebooted (or the settings are reloaded with sysctl --system), postfix will be able to bind itself to non-local addresses successfully.
Installing Grafana and all the other necessary components
System monitoring is genuinely important. Having some pretty graphs to look at that track various stats of the server can be quite useful.
To this end, we will set up Grafana as our graphs dashboard, where we will visualize all of the relevant metrics of the system, as well as Prometheus as the data aggregator and Node Exporter as the data collector.
Let's get started!
Installing Node Exporter
Use wget or any other utility to grab the latest version of node exporter.
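For example (the version number below is illustrative; check the project's releases page for the latest one):
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.1/node_exporter-1.8.1.linux-amd64.tar.gz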
Once this is done, extract the contents of the archive:
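tar -xf node_exporter-1.8.1.linux-amd64.tar.gz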
We will be running this as its own user. In order to avoid having to create a home directory for that user, it's best if we move the binaries that just got extracted to a system-level directory:
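cp node_exporter-1.8.1.linux-amd64/node_exporter /usr/local/bin/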
Create the new user:
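The exact flags are a matter of preference; a system account with no home directory and no login shell is enough:
useradd --no-create-home --shell /bin/false node_exporter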
Create a new systemd service file at /etc/systemd/system/node_exporter.service that will start node_exporter automatically after each boot:
[Unit]
Description=Node Exporter
After=network.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
Once this has been done, reload the service files, enable the newly created service and start it:
systemctl daemon-reload
systemctl enable node_exporter
systemctl start node_exporter
Installing Prometheus
Prometheus will be aggregating all the data that is collected by node_exporter and allowing it to be queried with a standardized syntax.
To install Prometheus, we must first download it:
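The version number matches the archive extracted below; check the project's releases page for the latest one:
wget https://github.com/prometheus/prometheus/releases/download/v2.1.0/prometheus-2.1.0.linux-amd64.tar.gz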
tar -xf prometheus-2.1.0.linux-amd64.tar.gz
Much like with node_exporter above, we will force Prometheus to run as its own user, for security reasons. As such, we should isolate its files to the root filesystem:
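The archive ships the prometheus and promtool binaries:
cp prometheus-2.1.0.linux-amd64/prometheus /usr/local/bin/
cp prometheus-2.1.0.linux-amd64/promtool /usr/local/bin/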
We should also create new directories to store the relevant data for Prometheus:
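mkdir /etc/prometheus
mkdir /var/lib/prometheus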
Then move the current directories to the appropriate system-level locations:
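These are the console template directories referenced by the service file below:
cp -r prometheus-2.1.0.linux-amd64/consoles /etc/prometheus/
cp -r prometheus-2.1.0.linux-amd64/console_libraries /etc/prometheus/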
Configuring Prometheus
Create a new /etc/prometheus/prometheus.yml with the following contents:
global:
  scrape_interval: 10s

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
Once the above is done, we should create the new prometheus user and give it ownership of the directories we just created (the useradd flags are one common choice for a no-login service account):
useradd --no-create-home --shell /bin/false prometheus
chown -R prometheus: /etc/prometheus /var/lib/prometheus
Then, please create an /etc/systemd/system/prometheus.service file with the following contents:
[Unit]
Description=Prometheus
After=network.target
[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
--config.file /etc/prometheus/prometheus.yml \
--storage.tsdb.path /var/lib/prometheus/ \
--web.console.templates=/etc/prometheus/consoles \
--web.console.libraries=/etc/prometheus/console_libraries
[Install]
WantedBy=multi-user.target
Then reload the unit files, enable the service and start it:
systemctl daemon-reload
systemctl enable prometheus
systemctl start prometheus
After everything has been done, we can proceed with Grafana itself.
Installing Grafana
Ideally, Grafana should be installed from its own APT repository, as this will keep it up to date automatically. To do so:
mkdir -p /etc/apt/keyrings/
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | tee -a /etc/apt/sources.list.d/grafana.list
apt-get update
apt-get install grafana
Configuring Grafana
After Grafana has been installed, we should change its default port from 3000 to 4000 (as port 3000 is already used by Gitea on our instance).
To do so, please edit /etc/grafana/grafana.ini by uncommenting the following line and changing the port number:
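http_port = 4000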
Once this is done, enable the grafana-server service and start it:
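systemctl enable grafana-server
systemctl start grafana-server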
Expose the newly created port as an nginx subdomain
Finally, configure an nginx service file for it. Create an /etc/nginx/sites-available/grafana.conf file with the following contents:
server {
    server_name stats.transistor.one;

    listen [::]:443 ssl http2; # managed by Certbot
    listen 443 ssl http2; # managed by Certbot
    include /etc/nginx/snippets/ssl.conf;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:4000;
    }
}
Apparently the proxy_set_header directive is necessary to avoid an origin error when trying to set a new password, since Grafana checks that the Host header matches the request's origin.
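Assuming the Debian sites-available/sites-enabled convention used above, the new site still has to be enabled and nginx reloaded:
ln -s /etc/nginx/sites-available/grafana.conf /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx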
Set up the environment
From this point on, all that's left is to log in to stats.transistor.one and set up a new account. The default credentials are username: admin and password: admin. You should change those immediately so that they will not get abused.
Once that's done, configure your own dashboard and make it work. Personally, I like to import a public dashboard called Node Exporter Full, which looks very cool.
Happy coding!