Configuring a VPS machine from scratch
(% class="jumbotron" %)
(((
(% class="container" %)
(((
= Starting out =
When starting with a clean Linux VPS, the first thing to ensure is that you have access to the root account (or that your user is in the sudoers file in such a way that you can make administrative changes to the system). If you don't have these access rights on the target system, you must first contact its administrator and ask for such privileges. Otherwise, this article will be of no use to you.

)))
)))

(% class="col-xs-12 col-sm-4" %)
(((
{{box title="**Contents**"}}{{toc /}}{{/box}}
)))

(% class="row" %)
(((
(% class="col-xs-12 col-sm-8" %)
(((
= Creating a new user =

If you don't have an active user just yet, please create one as follows:

{{code language="bash"}}
sudo adduser <newuser>
{{/code}}

Please replace <newuser> with your username.
Then, to give them sudo rights (i.e. add them to the sudo group), just do:

{{code language="bash"}}
sudo usermod -aG sudo <newuser>
{{/code}}
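
To double-check that the new account actually has sudo rights, you can, for example, inspect its group membership and run a harmless command through sudo from a shell owned by that user:

{{code language="bash"}}
# Confirm the account is in the "sudo" group
groups <newuser>

# From a session logged in as <newuser>, confirm privilege escalation works
sudo whoami    # should print "root"
{{/code}}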

== Using root vs. using a sudoers user ==

There's an argument to be made between logging in as root on the system directly and using a separate user which is part of the sudoers file.
Both are viable options for administrators who need to make configuration changes on the target system.
Some people argue that adding other users to a system just adds complexity and, especially if you plan to be the sole administrator of the VPS, it makes little sense not to use the already built-in root user. After all, root starts out with all the privileges and, most importantly, you'll never have to remember to keep invoking "sudo" whenever you need to run administrative commands in the terminal. For this camp, root is the superior approach.

However, others make the argument that constantly running as root is a security risk for the system as a whole. When running as root, any terminal command issued will also run with absolute privileges, which risks amplifying any human error to disastrous proportions.
Granted, this risk also exists for sudoers users, but only for the commands that are run with "sudo". Having to manually prefix each command with "sudo" is seen as a preventative measure against accidental system damage.
Moreover, if the administrator is diligent, they may restrict the sudoers users' permissions to only allow privilege escalation for commands which they deem safe to run, effectively blocking overly dangerous commands such as "dd" or "rm -Rf".
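
As a rough illustration of that last point, sudo permissions can be narrowed down to an explicit command whitelist via a drop-in file under /etc/sudoers.d/ (always edited through visudo). The user name and command list below are purely hypothetical examples:

{{code language="none"}}
# /etc/sudoers.d/deploy -- edit with: visudo -f /etc/sudoers.d/deploy
# Allow the (hypothetical) user "deploy" to escalate only for these commands
deploy ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/apt-get update
{{/code}}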

Ultimately, the argument for or against using root access is mostly a philosophical one rather than a technical one. There is no right or wrong answer to this question. Rather, each answer brings its own advantages and disadvantages to the table. What really matters is what you're more comfortable with using in the end.
Moreover, using a sudoers user rather than root is not inherently a guarantee of system safety and should not be taken as licence to run suspicious executable files from the internet, as privilege escalation bugs have existed in the Linux kernel since its inception.

= Install the necessary utilities =
Install docker, postfix and nginx, all of which you will be using consistently from this point on.

Afterwards, install the certbot-plugin-gandi plugin to enable automatic certificate renewals using Gandi. This may require you to install pip3 first.

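As a rough sketch (exact package names may differ slightly depending on your distribution), the installation could look something like this:

{{code language="bash"}}
# Base services and tools (Debian/Ubuntu package names assumed)
sudo apt-get update
sudo apt-get install postfix nginx certbot python3-pip

# The Gandi DNS authenticator plugin for certbot
sudo pip3 install certbot-plugin-gandi
{{/code}}

Docker itself is best installed from its upstream repository, as described in its own section further down.
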
Once the plugin is installed, just do:

{{code language="bash"}}
certbot certonly --authenticator dns-gandi --dns-gandi-credentials /etc/letsencrypt/gandi/gandi.ini -n -d 'transistor.one,*.transistor.one' --agree-tos --email=alexandru.pentilescu@disroot.org
{{/code}}

The contents of /etc/letsencrypt/gandi/gandi.ini should look like the following:
# live dns v5 api key
dns_gandi_api_key=<gandi_api_key>

The <gandi_api_key> token should be replaced with the actual API key generated from the Gandi website for your account.
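
Depending on how certbot was installed, it usually ships a systemd timer or cron job that renews certificates automatically. To verify that renewal will work with the Gandi plugin without actually touching your live certificates, a dry run can be used:

{{code language="bash"}}
certbot renew --dry-run
{{/code}}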

= Setting up an SMTP server =
This will be required for all the things you will be doing on the server from this point on.

Moreover, this is not an easy task and will be a little time-consuming. Please consult the documentation [[here>>https://wiki.transistor.one/bin/view/Guides/How%20to%20setup%20a%20postfix%20SMTP%20server/]].

= Setting up automatic updates =

System updates are a necessity for modern-day operating systems and, on Linux especially, there's a constant need to run such updates regularly to avoid the risk of running vulnerable software that can be exploited by malware.

Linux in particular has become an attractive target for malware writers in recent years, because corporate servers owned by well-known companies are seen as a profitable target for malicious actors who wish to extort money from the operators of unpatched systems.

After all, large corporations running unpatched systems are more likely to pay significant amounts of money than lone private users if their systems were to be compromised.

As such, it's imperative to protect our systems from such damage by preventing the attacks in the first place. The first step towards this goal is keeping the system constantly patched, and a good way to do that without manual intervention is to set up automatic updates.

Debian-based systems have an official package known as unattended-upgrades which can do just this. To install it, please run the following command:

{{code language="bash"}}
sudo apt-get install unattended-upgrades
{{/code}}

This will install the package from the official repositories. After this, the administrator can configure it by editing the /etc/apt/apt.conf.d/50unattended-upgrades file, which is generated automatically when the package is installed.
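
Note that, in addition to 50unattended-upgrades, the periodic APT job itself has to be switched on for upgrades to actually run. Assuming the standard Debian packaging, one common way is to make sure /etc/apt/apt.conf.d/20auto-upgrades exists with the following contents (running "sudo dpkg-reconfigure -plow unattended-upgrades" will generate it for you):

{{code language="none"}}
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
{{/code}}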

There's a lot that can be configured in the 50unattended-upgrades file itself. Some of the options which I personally prefer to activate by uncommenting are the following:

{{code language="none"}}
// This option controls whether the development release of Ubuntu will be
// upgraded automatically. Valid values are "true", "false", and "auto".
Unattended-Upgrade::DevRelease "auto";

// Send email to this address for problems or packages upgrades
// If empty or unset then no email is sent, make sure that you
// have a working mail setup on your system. A package that provides
// 'mailx' must be installed. E.g. "user@example.com"
Unattended-Upgrade::Mail "Alexandru.Pentilescu@disroot.org";
{{/code}}

This tells the system that email notifications about updates should be delivered to that specific address, so that I'm notified by email every time updates occur. Of course, you need to have an SMTP server running locally, as described in the previous section.

Then:

{{code language="none"}}
// Remove unused automatically installed kernel-related packages
// (kernel images, kernel headers and kernel version locked tools).
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";

// Do automatic removal of newly unused dependencies after the upgrade
Unattended-Upgrade::Remove-New-Unused-Dependencies "true";
{{/code}}

These configuration options instruct the package to remove packages and dependencies that become obsolete as updates come in.

{{code language="none"}}
// Automatically reboot *WITHOUT CONFIRMATION* if
// the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "true";
{{/code}}

This instructs the package to automatically reboot the system when required. This is necessary after certain updates, most notably kernel updates, which only take full effect once the system boots into the new version.

Finally:

{{code language="none"}}
// If automatic reboot is enabled and needed, reboot at the specific
// time instead of immediately
// Default: "now"
Unattended-Upgrade::Automatic-Reboot-Time "02:00";
{{/code}}

This instructs the package, whenever an update requires a reboot, to reboot the whole system automatically the next time the system clock reaches the configured time rather than immediately. I set mine to 2 AM; you may change the time to whatever fits your needs.
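
If you'd like to confirm that your configuration is picked up without waiting for the nightly run, unattended-upgrades can be invoked manually in dry-run mode:

{{code language="bash"}}
sudo unattended-upgrades --dry-run --debug
{{/code}}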

= Installing docker =

Docker is an almost irreplaceable piece of software that will be critical to your whole infrastructure, so it needs to be installed on the system properly. In order to do so, please follow the guide [[here>>https://docs.docker.com/engine/install/ubuntu/]].

= Force postfix to bind to non-local IP addresses on start =

If we plan on using our SMTP server to relay emails coming from our docker containers, we will have to force postfix to bind to an IP address that's different from localhost. This needs to be done because, if we configure postfix to only bind to localhost, it will effectively be unreachable from our docker containers and they will not be able to use it as a relay.
In order to allow postfix to bind to non-local addresses, we have to create the configuration file /etc/sysctl.d/80-network.conf with the following contents:

{{code language="ini"}}
net.ipv4.ip_nonlocal_bind = 1
net.ipv6.ip_nonlocal_bind = 1
{{/code}}

Honestly, the "ipv6" line is unnecessary for our purposes, but I'm adding it anyway. Once this file is in place and the system has been rebooted, postfix will be able to bind itself to non-local addresses successfully.
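
If you'd rather not wait for a reboot, the new settings can usually be applied on the spot by reloading all sysctl configuration files:

{{code language="bash"}}
sudo sysctl --system
{{/code}}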

= Installing Grafana and all the other necessary components =
System monitoring is genuinely important. As such, having some pretty graphs to look at that track various stats of the server can be quite useful.
To this end, we will set up Grafana as our dashboard where we will visualize all of the relevant metrics of the system, Prometheus as the data aggregator, and Node Exporter as the data collector.

Let's get started!

== Installing node exporter ==
Use wget or any other utility to grab a release of node exporter (version 0.15.2 is used throughout the examples below; you may want to substitute the latest release):

{{code language="bash"}}
wget https://github.com/prometheus/node_exporter/releases/download/v0.15.2/node_exporter-0.15.2.linux-amd64.tar.gz
{{/code}}

Once this is done, extract the contents of the archive:

{{code language="bash"}}
tar -xf node_exporter-0.15.2.linux-amd64.tar.gz
{{/code}}

We will be running this as its own user. In order to avoid having to create a home directory for that user, it's best to move the freshly extracted binary to a system-wide location:

{{code language="bash"}}
mv node_exporter-0.15.2.linux-amd64/node_exporter /usr/local/bin
{{/code}}

Create the new user:

{{code language="bash"}}
useradd -rs /bin/false node_exporter
{{/code}}

Create a new systemd service file (e.g. /etc/systemd/system/node_exporter.service) that will start node_exporter automatically after each boot:

{{code language="systemd"}}
[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
{{/code}}

Once this has been done, reload the service files, enable the newly created service and start it:

{{code language="bash"}}
systemctl daemon-reload
systemctl enable node_exporter
systemctl start node_exporter
{{/code}}
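
Assuming node_exporter is running with its default settings, it listens on port 9100; a quick way to check that it is serving metrics:

{{code language="bash"}}
curl -s http://localhost:9100/metrics | head
{{/code}}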

== Installing Prometheus ==
Prometheus will aggregate all the data collected by node_exporter and allow it to be queried with a standardized syntax.

To install Prometheus, we must first download it (again, substitute a newer release if you prefer):

{{code language="bash"}}
wget https://github.com/prometheus/prometheus/releases/download/v2.1.0/prometheus-2.1.0.linux-amd64.tar.gz
tar -xf prometheus-2.1.0.linux-amd64.tar.gz
{{/code}}

Much like with node_exporter above, we will force Prometheus to run as its own user, for security reasons. As such, we should move its binaries to a system-wide location:

{{code language="bash"}}
mv prometheus-2.1.0.linux-amd64/prometheus prometheus-2.1.0.linux-amd64/promtool /usr/local/bin
{{/code}}

We should also create new directories to store the configuration and data for Prometheus:

{{code language="bash"}}
mkdir /etc/prometheus /var/lib/prometheus
{{/code}}

Then move the consoles and console_libraries directories from the extracted archive to their system-level location:

{{code language="bash"}}
mv prometheus-2.1.0.linux-amd64/consoles prometheus-2.1.0.linux-amd64/console_libraries /etc/prometheus
{{/code}}

== Configuring Prometheus ==
Create a new /etc/prometheus/prometheus.yml with the following contents:

{{code language="yml"}}
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
{{/code}}
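
Since promtool was copied to /usr/local/bin in the previous step, it can be used to sanity-check this configuration file before starting anything:

{{code language="bash"}}
promtool check config /etc/prometheus/prometheus.yml
{{/code}}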

Once the above is done, we should create the new prometheus user and give it ownership of the directories created earlier:

{{code language="bash"}}
useradd -rs /bin/false prometheus
chown -R prometheus: /etc/prometheus /var/lib/prometheus
{{/code}}

Then, please create an /etc/systemd/system/prometheus.service file with the following contents:

{{code language="systemd"}}
[Unit]
Description=Prometheus
After=network.target

[Service]
User=prometheus
Group=prometheus
Type=simple
ExecStart=/usr/local/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target
{{/code}}

Then reload the service files, enable the new service and start it:

{{code language="bash"}}
systemctl daemon-reload
systemctl enable prometheus
systemctl start prometheus
{{/code}}
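
Prometheus listens on port 9090 by default. A quick way to confirm that it came up and is scraping the node exporter is to query its HTTP API:

{{code language="bash"}}
# Should return HTTP 200 with a short health message
curl -s http://localhost:9090/-/healthy

# Lists the configured scrape targets and their health
curl -s http://localhost:9090/api/v1/targets
{{/code}}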

After everything has been done, we can proceed with Grafana itself.

== Installing Grafana ==
Ideally Grafana should be installed from its own APT repository, as this will keep it updated constantly. To do so:

{{code language="bash"}}
apt-get install -y apt-transport-https software-properties-common wget
mkdir -p /etc/apt/keyrings/
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | tee -a /etc/apt/sources.list.d/grafana.list
apt-get update
apt-get install grafana
{{/code}}

== Configuring Grafana ==
After Grafana has been installed, we should change its default port from 3000 to 4000 (as port 3000 is already used by Gitea on our instance).

To do so, please edit /etc/grafana/grafana.ini, uncomment the http_port line and change the port number:

{{code language="ini"}}
http_port = 4000
{{/code}}

Once this is done, enable the grafana-server service and start it:

{{code language="bash"}}
systemctl daemon-reload && systemctl enable grafana-server && systemctl start grafana-server.service
{{/code}}
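
If everything went well, Grafana should now be answering on the new port:

{{code language="bash"}}
systemctl status grafana-server --no-pager
curl -sI http://localhost:4000 | head -n 1
{{/code}}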

== Expose the newly created port as an nginx subdomain ==

Finally, configure an nginx virtual host for it. Create an /etc/nginx/sites-available/grafana.conf file with the following contents:

{{code language="nginx"}}
server {
    server_name stats.transistor.one;

    listen [::]:443 ssl http2; # managed by Certbot
    listen 443 ssl http2; # managed by Certbot

    include /etc/nginx/snippets/ssl.conf;

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:4000;
    }
}
{{/code}}

The proxy_set_header directive is apparently necessary to preserve the original Host header; without it, Grafana can throw an odd error when you try to set a new password.
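
The new server block still has to be enabled and nginx reloaded. On the standard Debian/Ubuntu nginx layout that would look roughly like this:

{{code language="bash"}}
ln -s /etc/nginx/sites-available/grafana.conf /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
{{/code}}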

== Set up the environment ==
From this point on, all that's left is to log in to stats.transistor.one and set up a new account. The default credentials are username admin and password admin. You should change these immediately so that they cannot be abused.

Once that's done, configure your own dashboard and make it work. Personally, I like to import a public dashboard called Node Exporter Full, which looks very cool.

Happy coding!
)))