Learning from Baremetal

Written by Matt Tolman
Published: Apr. 29, 2025
Estimated reading time: 10 min read

Lately I've been working with baremetal servers. I have two servers in a rack for personal use. They aren't big or fancy, and I got them used. But, they do work. As part of this, I've learned a lot about servers and server management, so here's where I'm at.

It's Not As Hard As Cloud Providers Say

Listening to the AWS/Azure/GCP rhetoric, you'd think running baremetal servers is difficult and time consuming, and that you need the "professionals" to do it (at a premium, of course).

The truth is, it's not hard, it's different. It's not any harder than programming, but it does require investing time to learn. There are going to be gotchas, pitfalls, mistakes, and learnings, but the same is true of programming as well.

A lot of the basics can be googled, like what RAID is, how to set up an antivirus program, etc. And not everything has to be fully automated. Sure, it'd be nice to have automated backups, but I'm not uploading files regularly (just a few times a year), so doing a manual backup when I actually upload something to my server isn't much of an inconvenience. This applies to a lot more than you'd think.

AWS has a web UI showing what programs are managed and their current status. What do I do? I just SSH in, or even go to the physical machine and plug in a keyboard and monitor, then run systemctl status or ps or top. Terraform has a way to automatically reconfigure everything. What do I do? I just have a bash script checked into git which I can SCP onto a server if I have to rebuild it, which only happens once every few years. Before that, I just had a notebook I scribbled stuff into.
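For the curious, that script is nothing clever. The real one is specific to my boxes, but a rough sketch of the shape of it (the package list, config, and service names here are placeholders, not my actual setup) looks like:

    #!/usr/bin/env bash
    # rebuild.sh - re-create a box from scratch; kept in git, scp'd over when needed
    set -euo pipefail

    # install whatever the box needs
    sudo apt update
    sudo apt install -y nginx sqlite3 rsync

    # drop in configs tracked alongside this script
    sudo cp ./nginx.conf /etc/nginx/nginx.conf
    sudo systemctl enable --now nginx

    # quick sanity checks instead of a dashboard
    systemctl status nginx --no-pager
    df -h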

It turns out, a lot of the fancy automated stuff just isn't needed, and it's the fancy stuff that's hard to do.

Availability Isn't King

I don't have a highly-available setup. I don't have redundant internet providers and backup generators for backup generators or anything like Amazon has in their data centers.

It turns out, I don't need that. I'm not running a business that promises "nine nines of availability". I'm running personal projects and a DVR for my kids. If the power goes out, who cares? Like, really, who cares? My kids are going to care more about the TV not turning on and the WiFi being down than the fact my servers don't have a dedicated backup generator.

The same thing goes for updates. Oh, I need to take down the DVR and run an update? Well, I can do it when the kids are at school or when they're asleep, or doing chores, or just whenever. If the DVR isn't working for a day, I can just tell them "go read a book." It's not a big deal.

Prioritize Backups

Not everything has to be backed up, and not every backup has to be DIY. Choosing what to back up, and where to back it up, is important, but not difficult. My general rule is that if I can redownload or reobtain something, it's not a high priority. If I can't, it's a higher priority until I can.

For example, if I lost all of the DVR data, then yeah, it'd suck, but it's not the end of the world since I'd just have to wait a day or two for my kids' programs to record new episodes.

If I lost my wedding photos, that's an issue. I can't simply recover them except from where I put them - no one else just puts them places for me. So I back those up to multiple locations - including the cloud. My favorite photos get the best backup of all time, physical copies.

That's right, a nice physical print of my favorite photos. Those prints can last decades. Which brings me to another point: you learn that backups don't have to be digital, they can be physical too.

In fact, physical backups are absolutely my most important backup mechanism. Anything I think is important enough gets a physical copy in addition to the digital copies.
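On the digital side, the copies themselves don't need anything fancy either. A rough sketch of the kind of thing I mean (the hostnames, paths, and the rclone remote are all made up for illustration):

    # push the irreplaceable stuff to a second box on the LAN...
    rsync -av --delete ~/photos/ backup-box:/backups/photos/

    # ...and to cloud storage (assumes an rclone remote named "cloud" is already configured)
    rclone sync ~/photos cloud:photos-backup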

Learn How To Make Minimal Access Users

I have set up users on my Linux machines that have restricted shell access. Not only that, but I updated their PATH environment variable so they can only access the binaries I allow. These are my "application runners" who run my applications. They're barebones so that I don't have to worry about them as much, and if something funny happens I can track which program went rogue. Really nice.
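If you haven't done this before, the rough recipe (the username and the allowed binary are placeholders) goes something like:

    # create an "application runner" with a restricted shell
    sudo useradd --create-home --shell /bin/rbash apprunner

    # give it a private bin dir containing only the binaries it's allowed to run
    sudo mkdir /home/apprunner/bin
    sudo ln -s /usr/bin/node /home/apprunner/bin/node   # whatever the app actually needs

    # lock PATH to that dir; rbash stops the user from changing it after login
    echo 'export PATH=$HOME/bin' | sudo tee /home/apprunner/.bash_profile
    sudo chown root:root /home/apprunner/.bash_profile

rbash also blocks cd, running commands by path, and output redirection, which covers most of what I want from these accounts.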

Docker Actually Sucks

The biggest issue with Docker is the lack of log rotation by default - which is really dumb. Web frameworks and servers have had log rotation built in for decades. But Docker? No, no log rotation here.

This has caused so many issues, especially with Java-based applications which assume writing to logs is free. I've had Minecraft servers in Docker take down and corrupt an entire server because I didn't set up Docker's log rotation properly. That's right, the container that was supposed to provide isolation took down my entire server. It was awful.
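If you do run Docker, the usual fix (as far as I can tell) is to set default limits for its json-file log driver in /etc/docker/daemon.json, roughly like this, then restart the daemon (it only applies to containers created afterwards):

    {
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "10m",
        "max-file": "3"
      }
    }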

Also, there's the issue of all the containers, images, and leftover files that get strewn all over the place, eating up disk space. It's a nightmare to track down and remove everything, and even when I do, I'm pretty sure Docker has a cache directory somewhere it's not telling me about. It's a pain to keep it from eating too much disk space.
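The built-in cleanup commands help, even if I've never fully trusted that they catch everything:

    # see what's eating the disk: images, containers, volumes, build cache
    docker system df

    # remove stopped containers, unused networks, dangling images, and build cache
    docker system prune

    # more aggressive: also remove unused images and (with --volumes) unused volumes
    docker system prune -a --volumes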

The only saving grace is docker-compose. I can at least get things reasonable with docker-compose. I have a little YAML snippet I copy and paste everywhere to get logs working. docker-compose is nice when trying to get more complicated software running, especially when self-hosting someone else's code that was only meant to be deployed with an enterprise-grade, CI/CD, AWS-powered pipeline. It's incredible how overly complex software has become, and Docker and friends have really made it much worse than it needs to be.
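The snippet I mentioned is basically the per-service version of those log limits. A sketch of the kind of thing I mean (the service and image names are made up):

    services:
      dvr:
        image: example/dvr:latest   # placeholder image
        logging:
          driver: json-file
          options:
            max-size: "10m"
            max-file: "3"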

At this point, I only use Docker as a last resort, and only when I really want to get something working and I've had tons of issues with every alternative that doesn't need Docker. I basically consider Docker to be a hard-drive-munching virus, since that's what it feels like when you're the one paying for that disk space.

There's A Strong Community

The sheer volume and size of the awesome self-hosted lists was mind-blowing to see. And some of the projects I've seen are really insane (like getting a Bluesky view running on a Raspberry Pi).

Plus, corporations have really entered the market with useful tooling at affordable price points. Want to run a public website from a Raspberry Pi in your shoebox without exposing your IP to the general internet? Well, there are services which will create a tunnel from your machine to a cloud server that fronts your website. There's localhost.run and Cloudflare Tunnel, among others.
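Both are a single command to try, though check their docs for current syntax; it's roughly:

    # localhost.run: expose a local port 8080 through an SSH reverse tunnel
    ssh -R 80:localhost:8080 localhost.run

    # Cloudflare: a quick tunnel in front of the same local port
    cloudflared tunnel --url http://localhost:8080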

Also, a lot of people on social media are pretty friendly, and it's great to see people helping others and sharing their projects.

You Rethink Code

With Docker and the cloud it's so easy to just "add one more integration" that before you know it the code is an absolute mess of dependencies, 3rd party software, and cloud-specific functionality. And you don't even know what half of it does.

With self-hosting, because adding a new integration is so much more time consuming, you tend not to. Do I really need a MySQL or Postgres server, or can I just use SQLite? Do I really need Kafka or SQS, or can I just have a queue with a lock? By the end of it, you tend to have much simpler code that's both easier to deploy and easier to maintain. It's really nice.

I've also been valuing self-contained executables and releases a lot more lately. If I can just have a single file I upload and that's all I need to deploy, then I'll do that. Whether it's a single binary, such as a Go or Bun executable, or a packaged release like an Erlang tar, I want that single file.
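When the release is one file, the whole deploy can be a couple of lines (the app and server names here are hypothetical):

    # build a self-contained Linux binary locally (Go as the example)
    GOOS=linux GOARCH=amd64 go build -o myapp .

    # push the one file and bounce the service
    scp ./myapp myserver:/opt/myapp/myapp
    ssh myserver 'sudo systemctl restart myapp'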

It's Easy To Get In Over Your Head

Despite how much simpler things have gotten, it's really easy to start on a project, end up screwing up a config file, and then spend several hours trying to fix it without success. I've had to reset entire servers because I screwed them up so badly. The easiest way to screw things up is to blindly muck around with users and folder permissions until you lock yourself out of both your home directory and the ability to sudo. Not fun. Fortunately, I had backups, so it wasn't too hard to restore.

Also, some things that seem simple end up being almost impossible. For instance, some applications are just really bad at installing updates. Like atrociously bad. I spent hours trying to install a security update for a package repository before deciding to delete the repository and its software entirely. Now I just put all of my builds in a directory I can SFTP to and from. It's still true that software sucks, and managing sucky software sucks too. Don't be too surprised when something simple ends up being a nightmare.

It's Cheap To Get Started

What do you need to get started? An old computer, or a Raspberry Pi, or even the computer you're already using. You can buy a used computer if you can't beg for one that someone's throwing away or scrap one from a dumpster. Sometimes you get lucky and find a school, library, or business upgrading their computer systems and you can get something cheap. Or maybe you just hold onto your old one when you upgrade. Or maybe you have a Framework laptop and you just keep the mainboard after upgrading. The point is, you don't need the shiny, new multi-processor server racks in data centers, or even a new Mac mini.

Self-hosting is just about getting software you care about running on the hardware you have. Often, self-hosted software follows a client-server model, and the servers are usually made to be fairly lightweight and don't need any graphical processing. Sure, it might be slower than a million dollar server farm, but that's not the point. You got it running on the hardware you have. That by itself is cool. It doesn't matter what the hardware is.

And this is where I think self-hosting and baremetal really shine. Not only is it cheap, but it reduces waste and prolongs the life of hardware. The best way to breathe new life into an old machine is to put a GUI-less (aka headless) version of Linux on there, slap on a webserver, and then just use a browser to interact with the program. It turns out, those old machines are really good when they don't have to render Windows with all of its "effects." Serving HTTP is perfectly fast for old hardware, especially if you avoid overly complex systems like Docker.
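Concretely, on a Debian-flavored box, getting to that point can be as little as:

    # headless box: install a webserver, turn it on, and manage it over SSH from then on
    sudo apt install -y nginx
    sudo systemctl enable --now nginx

    # then browse to http://<the-machine's-LAN-IP>/ from any other device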

So, if you have an old machine lying around, or you can easily get your hands on one, go ahead and give it a try. You'll learn a lot, and you may even have fun!