Just wondering what tools and techniques people are using to keep on top of updates, particularly security-related updates, for their self-hosting fleet.

I’m not talking about docker containers - that’s relatively easy. I have Watchtower pull (not update) latest images once per week. My Saturday mornings are usually spent combing through Portainer and hitting the recreate button for those containers with updated images. After checking the service is good, I manually delete the old images.
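For reference, that pull-only Watchtower setup looks roughly like this as a compose service (flags and schedule are from memory, so double-check them against the Watchtower docs):

```yaml
# Watchtower in monitor-only mode: checks (and notifies) for new images,
# but never recreates containers itself.
# The schedule is a 6-field cron spec (first field is seconds).
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_MONITOR_ONLY=true
      - WATCHTOWER_SCHEDULE=0 0 6 * * 6   # 06:00 every Saturday
```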

But, I don’t have a centralised, automated solution for all my Linux hosts. I have a few RasPis and a bunch of LXCs on a pair of Proxmox nodes, all running their respective variation of Debian.

Not a lot of this stuff is exposed direct to the internet - less than a handful of services, with the rest only accessible over Wireguard. I’m also running OPNsense with IPS enabled, so this problem isn’t exactly keeping me up at night right now. But, as we all know, security is about layers.

Some time ago, on one of my RasPis, I did set up Unattended Upgrades and it works OK, but there was a little bit of work involved in getting it set up just right. I don’t relish the idea of doing that another 40 or so times for the rest of my fleet.


I also don’t want all of those hosts grabbing updates at around the same time, smashing my internet link (yes, I could randomise the cron job within a time range, but I’d rather not have to).
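(If I did go down the randomised-cron route, the sketch is simple enough. One caveat: `$RANDOM` is a bashism, so the crontab would need `SHELL=/bin/bash` or a `bash -c` wrapper.)

```shell
# Sleep a random amount (0..3599s) before hitting the mirrors, so a
# fleet of hosts doesn't all fetch updates at the same moment.
jitter=$((RANDOM % 3600))
echo "sleeping ${jitter}s before the apt run"

# In a crontab this would look something like:
#   SHELL=/bin/bash
#   0 4 * * 6 root sleep $((RANDOM % 3600)) && apt-get -qq update && unattended-upgrade
```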

I have a fledgling Ansible setup that I’m just starting to wrap my head around. Is that the answer? Is there something better?

Would love to hear how others are dealing with this.

Cheers!

  • @Decronym@lemmy.decronym.xyzB · 20 points · edited · 10 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    LXC             Linux Containers
    Plex            Brand of media server package
    RPi             Raspberry Pi brand of SBC
    SBC             Single-Board Computer
    VPS             Virtual Private Server (opposed to shared hosting)

    4 acronyms in this thread; the most compressed thread commented on today has 20 acronyms.

    [Thread #47 for this sub, first seen 15th Aug 2023, 07:25]

  • dr_robot · 4 points · 10 months ago

A few simple rules make it easy for me:

    • Firstly, I do not run anything critical myself. I cannot guarantee that I will have time to resolve issues as they come up. Therefore, I tolerate a moderate risk of a borked update.
    • All servers run the same OS. Therefore, I don’t have to resolve different issues for different machines. There is then the risk that one update will take them all out, but see my first point.
    • That OS is stable (Debian, in my case), so updates are rare and generally safe to apply without much thought.
    • Run as little as possible on bare metal and avoid third party repos or downloading individual binaries unless absolutely necessary. Complex services should run in containers and update by updating the container image.
    • Run unattended-upgrades on all of them. I deploy the configuration via Ansible. Since they all run the same OS, I only need to figure out the right configuration once and then it’s just a matter of using Ansible to deploy it everywhere. I do blacklist kernel updates on my main server, because it uses ZFS through DKMS, so kernel updates are too risky to apply blindly.
    • Have postfix set up so that unattended-upgrades can email me when a reboot is required. I reboot only when I know I’ll have some time to fix anything that breaks. For the blacklisted packages I will get an email that they’ve been held back so I know that I need to update manually.

    This has been working great for me for the past several months.
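    As a rough sketch (the task names and template filename are placeholders, not my actual repo), the Ansible side is only a few tasks:

```yaml
# Minimal playbook: install unattended-upgrades on every host and
# push one shared config. templates/50unattended-upgrades.j2 is an
# assumed filename.
- hosts: all
  become: true
  tasks:
    - name: Install unattended-upgrades
      ansible.builtin.apt:
        name: unattended-upgrades
        state: present

    - name: Enable periodic update runs
      ansible.builtin.copy:
        dest: /etc/apt/apt.conf.d/20auto-upgrades
        content: |
          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";

    - name: Deploy shared unattended-upgrades config
      ansible.builtin.template:
        src: 50unattended-upgrades.j2
        dest: /etc/apt/apt.conf.d/50unattended-upgrades
```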

    For containers, I rely on Podman auto-update and systemd. Actually, on my own script that imitates its behavior, because I had issues with Podman pulling images that were not new but that nevertheless triggered restarts of the containers. However, I pin the major version number manually, and check and update major versions manually. Major version updates stung me too much in the past when I’d apply them after a long break.
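    The digest-compare idea is roughly this (image and unit names are placeholders, and the actual podman/systemctl calls are commented out so only the decision logic is shown):

```shell
# Restart a systemd unit only when the image digest actually changed.
restart_if_changed() {
  old_digest=$1; new_digest=$2; unit=$3
  if [ "$old_digest" != "$new_digest" ]; then
    echo "digest changed, restarting $unit"
    # systemctl restart "$unit"
  fi
}

# On a real host it would be driven like this:
# old=$(podman image inspect --format '{{.Digest}}' docker.io/library/nginx)
# podman pull -q docker.io/library/nginx
# new=$(podman image inspect --format '{{.Digest}}' docker.io/library/nginx)
# restart_if_changed "$old" "$new" nginx.service

restart_if_changed sha256:aaa sha256:bbb nginx.service   # demo values
```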

    • @DeltaTangoLima@reddrefuge.com (OP) · 1 point · 10 months ago

      I deploy the configuration via Ansible. Since they all run the same OS, I only need to figure out the right configuration once and then it’s just a matter of using Ansible to deploy it everywhere. I do blacklist kernel updates on my main server

      Yep, this is what I was thinking I’d have to do. So, from your perspective, unattended-upgrades is still the best way to achieve this on Debian, with the right config? Cheers.

      • dr_robot · 2 points · edited · 10 months ago

        Correct. And getting the right configuration is pretty easy; Debian has good defaults. The only changes I make are to configure it to send me emails when updates are installed. These emails also tell you in the subject line if you need to reboot, which is very convenient. As I said, I also blacklist kernel updates on the server that uses ZFS, as recompiling the modules causes inconsistencies between kernel and user space until a reboot. If you set up emails, you will also know when these updates are ready to be installed, because you’ll be notified that they’re being held back.

        So yea, I strongly recommend unattended-upgrades with email configured.

        Edit: you can also make it reboot itself if you want to. Might be worth it on devices that don’t run anything very important and that can handle downtime.
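        The relevant knobs look roughly like this (the address is a placeholder, and `MailReport` only exists in newer versions of unattended-upgrades; older ones use `MailOnlyOnError` instead):

```
// Excerpt from /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Mail "admin@example.com";
Unattended-Upgrade::MailReport "on-change";

// Hold back kernels on the ZFS/DKMS box (matched as a regex prefix):
Unattended-Upgrade::Package-Blacklist {
    "linux-image-";
};

// Optional: let the host reboot itself at a quiet hour.
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:30";
```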

        • @DeltaTangoLima@reddrefuge.com (OP) · 1 point · 10 months ago

          Yep, cool. The single host I have with UU running on it already sends the listchanges emails, which I’ve found useful.

          Well, time to refresh my memory on how I have it set up, and build an Ansible playbook to repeat that success everywhere else.

          Cheers.

  • @dtc@lemmy.pt · 4 points · edited · 10 months ago

    I’m in the process of migrating my servers to NixOS. It takes a lot of time and the learning curve is steep, but I have one config shared by all the servers and PCs. I’ve set up the servers to automatically pull the latest configuration every day, and even reboot if there’s a kernel update.

    This means I just need to update my laptop and push the changes to the repository, and all the servers will also update.

    I haven’t had this setup long enough to know if things will break unexpectedly with updates, though. NixOS has a great feature where you can rollback to a previous configuration (generation) with a single command. You can always keep using containers to isolate updates, if you want (Nix allows you to declare those in the config as well).

    As an example, you can take a look at my config.

    EDIT: Systemd timers have an option to randomize the time a service runs; I use it all the time. Nix’s config pulling runs via a systemd timer, so you can use that option there too.
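    With NixOS’s built-in option set, the auto-pull idea looks roughly like this (the flake URL is a placeholder for your own repo):

```nix
{
  system.autoUpgrade = {
    enable = true;
    flake = "github:youruser/nixos-config";  # placeholder
    dates = "04:00";
    randomizedDelaySec = "45min";  # spreads hosts out over the hour
    allowReboot = true;            # reboot when the kernel changed
  };
}
```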

    • @DeltaTangoLima@reddrefuge.com (OP) · 1 point · edited · 10 months ago

      OK, that does sound really good. Reminds me of a CVS- and Perl-based config management system I worked on many (many) years ago (invented by one of the other sysadmins before I got there). That was for OpenBSD, but a similar concept: centralised config, pushed to clients, with automation of service/server restarts as required.

      I might have to consider NixOS for a long-term strategy. Cheers.

  • @NonDollarCurrency@monero.town · 3 points · 10 months ago

    I set up flexo for Arch Linux update caching and squid proxy for Alpine, Debian. This stops me from having to download the same files over and over.
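    On the apt side, pointing clients at the cache is a one-liner (host and port are placeholders; 3128 is just squid’s common default):

```
# /etc/apt/apt.conf.d/01proxy
Acquire::http::Proxy "http://cache.lan:3128";
```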

    • @DeltaTangoLima@reddrefuge.com (OP) · 1 point · 10 months ago

      Yeah - a caching proxy would alleviate the pain on the internet link, for sure. So flexo is similar to Unattended Upgrades for Debian, yeah? Automates pacman?

      • Dataprolet · 2 points · 10 months ago

        No, Flexo is not like Unattended Upgrades. Flexo just caches packages so you can download them locally using pacman as usual. It’s mainly there to increase download speeds and avoid downloading the same files repeatedly to different clients on one network. Unattended Upgrades actually installs security updates automatically without user input, which is by design not supported, and not possible, on Arch Linux.
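        On the clients it’s just a mirrorlist entry (host is a placeholder; 7878 is Flexo’s default port, but check your setup):

```
# /etc/pacman.d/mirrorlist
Server = http://flexo.lan:7878/$repo/$os/$arch
```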

  • @vegetaaaaaaa@lemmy.world · 2 points · 10 months ago

    I did setup Unattended Upgrades and it works OK, but there was a little bit of work involved in getting it setup just right. I don’t relish the idea of doing that another 40 or so times for the rest of my fleet.

    Automate it! I run unattended-upgrades on dozens of servers without any problems: [1] [2]. Configuration is actually really simple.

    I use other methods for things that are not distribution packages [3], but for APT upgrades unattended-upgrades is the only correct™ solution.

  • Jeena · 1 point · 10 months ago

    I consolidated everything onto one slightly beefier VPS, so if I update that one VPS, 90% of the stuff updates itself. The rest are 3 RPis running Home Assistant in different places; those I go through manually and update when I see there’s a new update.

    • @DeltaTangoLima@reddrefuge.com (OP) · 1 point · 10 months ago

      Wow. No concerns an update will bork that 90% of your fleet that sits on the VPS? That’s one reason I’m loving LXCs - anything that screws with one specific service doesn’t pose a risk to any other service.

      • Jeena · 4 points · edited · 10 months ago

        It hasn’t in the last 10 or so years, but if it does, it’s not a problem: I have backups which I can get up and running within half an hour.

        I’m not running anything mission critical, just single user instances of Mastodon, Lemmy, Nextcloud, PeerTube, Matrix, my website, Firefox Sync, some old static websites of mine and my sister which are basically archived. So even if it’s down for a week, nobody but me cares.

        • @DeltaTangoLima@reddrefuge.com (OP) · 1 point · 10 months ago

          Yep, understood. My setup is a little more “mission” critical, if you consider availability of my Plex, *arrs, Home Assistant and Pi-holes to be the mission, and the critical bit being that I have impatient teenagers in the house.

        • @Haui@discuss.tchncs.de · 1 point · 10 months ago

          That’s actually insanely cool! I’m on a similar path rn: 10+ containers running services, thinking of adding PeerTube, Lemmy and co, as well as my webpages. But it’s still a home server, so I’d need to go VPS at some point.

          Did you start at home or directly go to vps? How was your journey?

          In any case, thanks for sharing and have a good one. :)

          • Jeena · 1 point · 10 months ago

            Actually my goal is to move everything to a home lab server, but my last one broke a year ago and I didn’t want to spend all the money at once to buy a new one, so I just moved everything to the VPS where I already had my website.

            • @Haui@discuss.tchncs.de · 1 point · 10 months ago

              Hrhr, that’s actually very funny. You’re basically the other car in the meme, driving in the opposite direction. How did you keep it from being hacked?

              • Jeena · 1 point · 10 months ago

                Just the normal stuff: keep everything up to date and don’t fuck with scriptkiddies.