Just wondering what tools and techniques people are using to keep on top of updates, particularly security-related updates, for their self-hosting fleet.
I’m not talking about Docker containers - that’s relatively easy. I have Watchtower pull (but not update) the latest images once per week. My Saturday mornings are usually spent combing through Portainer and hitting the recreate button for those containers with updated images. After checking that each service is good, I manually delete the old images.
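For anyone wanting the same pull-only workflow, a sketch of the Watchtower invocation (the schedule here is an example - Saturdays at 06:00; WATCHTOWER_MONITOR_ONLY keeps Watchtower from recreating containers, so it only pulls and notifies):

```shell
# Pull-only Watchtower: checks and pulls new images on a schedule,
# but never recreates containers (that stays a manual step).
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_MONITOR_ONLY=true \
  -e WATCHTOWER_SCHEDULE="0 0 6 * * 6" \
  containrrr/watchtower
```

Note that Watchtower uses six-field cron expressions (seconds first), not the five-field crontab format.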
But, I don’t have a centralised, automated solution for all my Linux hosts. I have a few RasPis and a bunch of LXCs on a pair of Proxmox nodes, all running their respective variation of Debian.
Not a lot of this stuff is exposed direct to the internet - less than a handful of services, with the rest only accessible over Wireguard. I’m also running OPNsense with IPS enabled, so this problem isn’t exactly keeping me up at night right now. But, as we all know, security is about layers.
Some time ago, on one of my RasPis, I did set up Unattended Upgrades and it works OK, but there was a little bit of work involved in getting it set up just right. I don’t relish the idea of doing that another 40 or so times for the rest of my fleet.
I also don’t want all of those hosts grabbing updates at around the same time, smashing my internet link (yes, I could randomise the cron job within a time range, but I’d rather not have to).
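Worth noting: on Debian, unattended-upgrades runs from the apt-daily-upgrade systemd timer, which already applies a random delay (60 minutes by default). A sketch of widening that window per host via a drop-in, so the fleet naturally spreads out (the 4h value is arbitrary):

```shell
# Widen the jitter on Debian's apt upgrade timer so hosts don't all
# fetch updates at the same moment. Drop-in override, needs root.
mkdir -p /etc/systemd/system/apt-daily-upgrade.timer.d
cat > /etc/systemd/system/apt-daily-upgrade.timer.d/override.conf <<'EOF'
[Timer]
RandomizedDelaySec=4h
EOF
systemctl daemon-reload
```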
I have a fledgling Ansible setup that I’m just starting to wrap my head around. Is that the answer? Is there something better?
Would love to hear how others are dealing with this.
Cheers!
A few simple rules make it quite simple for me:
- Firstly, I do not run anything critical myself. I cannot guarantee that I will have time to resolve issues as they come up. Therefore, I tolerate a moderate risk of a borked update.
- All servers run the same OS, so I don’t have to resolve different issues for different machines. There is then the risk that one update will take them all out, but see my first point.
- That OS is stable (Debian, in my case), so updates are rare and generally safe to apply without much thought.
- Run as little as possible on bare metal, and avoid third-party repos or downloading individual binaries unless absolutely necessary. Complex services should run in containers and be updated by updating the container image.
- Run unattended-upgrades on all of them. I deploy the configuration via Ansible. Since they all run the same OS, I only need to figure out the right configuration once and then it’s just a matter of using Ansible to deploy it everywhere. I do blacklist kernel updates on my main server, because it has ZFS through DKMS on it so it’s too risky to blindly apply.
- Have postfix set up so that unattended-upgrades can email me when a reboot is required. I reboot only when I know I’ll have some time to fix anything that breaks. For the blacklisted packages I will get an email that they’ve been held back so I know that I need to update manually.
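The mail and kernel-blacklist tweaks described above boil down to a small apt.conf fragment. A sketch, with an illustrative file name, address, and blacklist patterns (the full option list ships in /usr/share/doc/unattended-upgrades):

```shell
# Local unattended-upgrades overrides: email reports, plus holding
# back kernel packages on the ZFS/DKMS host. Held packages appear
# in the mail report instead of being installed.
cat > /etc/apt/apt.conf.d/52unattended-upgrades-local <<'EOF'
// Needs a working MTA such as postfix on the host.
Unattended-Upgrade::Mail "admin@example.com";

// Entries are regular expressions matched against package names.
Unattended-Upgrade::Package-Blacklist {
    "linux-image-";
    "linux-headers-";
};
EOF
```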
This has been working great for me for the past several months.
For containers, I rely on Podman auto-update and systemd. Actually, it’s my own script that imitates their behaviour, because I had issues with Podman pulling images that were not new but which nevertheless triggered restarts of the containers. However, I pin the major version number and check and apply major-version updates manually. Major version updates stung me too much in the past when I’d apply them after a long break.
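For reference, the stock Podman mechanism my script imitates looks roughly like this (container name and image are placeholders; pinning the major tag, :16 rather than :latest, keeps auto-update within a major version):

```shell
# Opt a container into auto-update and wire it into systemd (rootless).
podman run -d --name db \
  --label io.containers.autoupdate=registry \
  docker.io/library/postgres:16

# Generate a systemd user unit for it, then enable the unit and the
# auto-update timer that periodically pulls and restarts on new images.
mkdir -p ~/.config/systemd/user
podman generate systemd --new --name db \
  > ~/.config/systemd/user/container-db.service
systemctl --user daemon-reload
systemctl --user enable --now container-db.service podman-auto-update.timer
```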
I deploy the configuration via Ansible. Since they all run the same OS, I only need to figure out the right configuration once and then it’s just a matter of using Ansible to deploy it everywhere. I do blacklist kernel updates on my main server
Yep, this is what I was thinking I’d have to do. So, from your perspective, Unattended Upgrades is still the best way to achieve this on Debian, with the right config? Cheers.
Correct. And getting the right configuration is pretty easy. Debian has good defaults. The only change I make is configuring it to send emails to me when updates are installed. These emails also tell you in the subject line if you need to reboot, which is very convenient. As I said, I also blacklist kernel updates on the server that uses ZFS, as recompiling the modules causes inconsistencies between kernel and user space until a reboot. If you set up emails, you will also know when these updates are ready to be installed, because you’ll be notified that they’re being held back.
So yea, I strongly recommend unattended-upgrades with email configured.
Edit: you can also make it reboot itself if you want to. Might be worth it on devices that don’t run anything very important and that can handle downtime.
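That opt-in auto-reboot is another small apt.conf fragment; a sketch (the file name and 02:30 window are examples):

```shell
# Let unattended-upgrades reboot the host itself when an update
# requires it, at a fixed quiet hour rather than immediately.
cat > /etc/apt/apt.conf.d/53unattended-reboot <<'EOF'
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "02:30";
EOF
```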
Yep, cool. The single host I have with UU running on it already sends the listchanges via email, which I’ve found useful.
Well, time to refresh my memory on how I have it set up and build an Ansible playbook to repeat that success everywhere else.
Cheers.
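Edit: for anyone following along, a hypothetical starting point for such a playbook (module names are the standard ansible.builtin ones; the config file path and contents are whatever worked on the first host):

```shell
# Minimal playbook: install unattended-upgrades everywhere and push
# out the one known-good local config file.
cat > unattended-upgrades.yml <<'EOF'
- hosts: all
  become: true
  tasks:
    - name: Install unattended-upgrades
      ansible.builtin.apt:
        name: unattended-upgrades
        state: present
        update_cache: true

    - name: Deploy local configuration
      ansible.builtin.copy:
        src: files/52unattended-upgrades-local
        dest: /etc/apt/apt.conf.d/52unattended-upgrades-local
        mode: "0644"
EOF
ansible-playbook -i inventory unattended-upgrades.yml
```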
apk upgrade -U in a daily cron job, and hope it does not break.
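On Alpine that’s a one-liner in root’s crontab (busybox crond reads /etc/crontabs/root; -U refreshes the package index before upgrading; the 04:00 time is an example):

```shell
# Daily automatic upgrade on Alpine, fire and forget.
echo '0 4 * * * apk upgrade -U' >> /etc/crontabs/root
```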
I did set up Unattended Upgrades and it works OK, but there was a little bit of work involved in getting it set up just right. I don’t relish the idea of doing that another 40 or so times for the rest of my fleet.
Automate it! I run unattended-upgrades on dozens of servers without any problems: [1] [2]. Configuration is actually really simple.
I use other methods for things that are not distribution packages [3], but for APT upgrades unattended-upgrades is the only correct™ solution.
Yep, I’m working on a test Ansible playbook now. Thanks for those repo links - very useful stuff in there.
I set up Flexo for Arch Linux update caching, and a Squid proxy for Alpine and Debian. This stops me having to download the same files over and over.
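On the Debian/Alpine side, pointing APT at the proxy is the only client-side change; a sketch, with a placeholder host and Squid’s default port (cacheability of .deb files is then tuned on the Squid side):

```shell
# Route all APT HTTP traffic through the local caching proxy so
# every host on the LAN shares one download of each package.
cat > /etc/apt/apt.conf.d/01proxy <<'EOF'
Acquire::http::Proxy "http://proxy.lan:3128";
EOF
```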
Yeah - a caching proxy would alleviate the pain on the internet link, for sure. So Flexo is similar to Unattended Upgrades for Debian, yeah? Automates pacman?
No, Flexo is not like Unattended Upgrades. Flexo just downloads packages into a cache so that you can fetch them locally using pacman as usual. It’s mainly there to increase download speeds and to avoid downloading the same files repeatedly for different clients on the same network. Unattended Upgrades actually installs security updates automatically, without user input. That is, by design, not supported and not possible on Arch Linux.
Ah, gotcha. Missed that bit about Squid being for Alpine and Debian. Makes more sense now. Cheers.