• 1 Post
  • 29 Comments
Joined 11 months ago
Cake day: August 15th, 2023

  • I have many friends who are vegan, and we live in an area + work in an industry with a comparatively high share of people on such a diet. We have talked about the topic at length, and my understanding is that in order to have a healthy diet you have to do quite a bit of research and spend time planning your meals. And then going out for dinner is often a pain, although this has improved in recent years.

    We eat much less meat than the general public. But going the next step and eliminating meat and then dairy products is not trivial, unless you have fewer responsibilities and/or more prior knowledge to get you up to speed. I simply do not have the time for that; I have a small kid to take care of. And we often struggle to plan enough meals ahead in the short window between finishing work and doing groceries.

    It might sound like an excuse to you. It feels the same on my end when my concerns are dismissed with some hand-waving by people who are usually in a completely different place in their life than me.


  • Running ZFS on consumer SSDs is an absolute no-go; you need datacenter-rated ones for power loss protection. The price goes brrrrt €€€€€

    I too had an idea for an SSD-only pool, but I scaled it back and only use it for VMs / DBs. Everything else is on spinning rust: 2 disks in a mirror with regular snapshots and off-site backup.

    Now if you don’t care about your data, you can just spin up whatever you want on a 120€ 2TB SSD. And then cry once it starts failing under average load.

    Edit: having no power loss protection with ZFS has an enormous (negative) impact on performance and tanks your IOPS.
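    For reference, a sketch of the mirror + snapshot setup I mean. Pool, dataset and device names are placeholders; this needs a host with ZFS installed, and in practice you’d use /dev/disk/by-id/ paths:

    ```shell
    # Create a 2-disk mirror pool
    zpool create tank mirror /dev/sda /dev/sdb

    # Dataset for the data you care about
    zfs create tank/data

    # Regular snapshot, e.g. from a cron job or a tool like zfs-auto-snapshot
    zfs snapshot tank/data@$(date +%Y-%m-%d)

    # The tempting "fix" for bad IOPS on consumer SSDs -- don't do this,
    # it trades your last seconds of writes on power loss for performance:
    # zfs set sync=disabled tank/data
    ```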



  • You are completely ignoring the fact that for many it is too time-consuming and involved to go vegan. And then you are imposing your belief that others should invest the same amount of resources, be it time or money, or they are worse human beings who don’t care about animals. In other words, being able to switch your diet is usually a sign of at least slight financial privilege. I just had some tofu, so you don’t have to preach to me. But let others be, and do not compare veganism to anti-genocide. It is absolutely ridiculous.



  • I know what you mean. Most people mean well, some are a bit too aggressive, but probably also mean well. I honestly sometimes roll my eyes when I start reading about tailscale, cloudflare tunnels etc. The main thing is not to expose anything you don’t absolutely need to expose.

    For access from the outside, the most you should need is a random high port forwarded for SSH into a dedicated host (which can be a VM / container if you don’t have a spare Raspberry Pi), and WireGuard on a host that gets the server package updated regularly. So probably not on your router, unless the vendor is on top of things.
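    A minimal WireGuard server config as a sketch; the addresses, port and key placeholders are made up and need to be replaced with your own:

    ```ini
    # /etc/wireguard/wg0.conf on the host inside your network
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820              # forward this UDP port on your router
    PrivateKey = <server-private-key>

    [Peer]
    # your phone / laptop
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32
    ```

    Bring it up with `wg-quick up wg0` and you only ever expose one UDP port.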

    Regarding ansible and documenting, I totally get your point. Ten years ago I was an absolute Linux noob and my flatmate had to set up an IRC bouncer on my RPi. It ran like that for a few years and I dared not touch anything. Then the SD card died and took down the bouncer, dynDNS and a few other things running on it.

    It takes me a lot of time to write and test my Ansible playbooks and custom roles, but every now and then I have to move services between hosts, and then it is an absolute life saver. Whenever I’m really low on time and need to get something up and running, I write things down in a readme in my infra repository and occasionally go through that backlog when I have nothing better to do.


  • One word of advice: document the steps you take to deploy things. If your hardware fails or you make a simple mistake, it can cost you weeks of work to recover. This may be a bit extreme, but I take my time when setting things up and automate as much as possible using Ansible. You don’t have to go that far, but the ability to just scrap things and redeploy gives great peace of mind.

    And right now you are reluctant to do this because it’s gonna cost you too much time. It shouldn’t be that way. I mean, just imagine things going wrong in a year or two, when you can’t remember most of the things you know now. Document your setup and write a few scripts. It’s a good start.
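    To give an idea of the scale: a useful Ansible playbook can be this small. Everything here (hostnames, package names, the config file) is an example, not a prescription:

    ```yaml
    # site.yml -- run with: ansible-playbook -i inventory site.yml
    - hosts: homelab
      become: true
      tasks:
        - name: Install base packages
          ansible.builtin.apt:
            name: [ufw, unattended-upgrades]
            state: present

        - name: Deploy service config from the infra repo
          ansible.builtin.copy:
            src: files/myservice.conf    # hypothetical config kept in the repo
            dest: /etc/myservice.conf
          notify: restart myservice

      handlers:
        - name: restart myservice
          ansible.builtin.service:
            name: myservice
            state: restarted
    ```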







  • Oh okay, that’s a lot of power. For reference, I just set up an old Haswell PC as a NAS, idling at 25W (it can’t reach the low package C-states) and usually at 28-30W running light workloads on an SSD pool. My plan was to add a 5-disk cage and at least 3 HDDs, with raidz2 across 5 disks as the mid-term goal. Absolutely unnecessary and a huge waste. I settled on fewer but larger disks, and in a mirror I can get 12-18 TB of usable space for under 500€. Less noise and power draw too.


  • Look for 5W-idle board + CPU combos that go down to package C6 state or deeper. HardwareLuxx has a spreadsheet with various builds focusing on low power. Sell half your disks, go mirror or raidz1, and invest the difference in an off-site VPS and/or backup. Storage on any SBC is a big pain and you will hit the SATA connector / IO limits very soon.
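    If you want to check how deep your package C-states actually go, something along these lines works on Linux (turbostat ships with the kernel tools; column names may vary by platform):

    ```shell
    # List the idle states the CPU exposes
    cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name

    # Watch package C-state residency live (look at the pc6/pc8 columns)
    sudo turbostat --quiet --show Pkg%pc2,Pkg%pc6,Pkg%pc8 --interval 5

    # Let powertop flip the usual power-saving tunables
    sudo powertop --auto-tune
    ```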

    The small NUC form factors are also fine, but if your problem is power, you can go very low with a good approach and the right parts. And you’ll make up for any new investment within the first year.




  • How does this work, some loophole or a business customer? You can drop some info in a private message if you don’t feel like posting in public. Re server part deals: I am not sure if this is always the case, but the current selection of disks is 90% helium HDDs (Exos etc.), a few IronWolfs that are too large (20TB), and basically that’s it. My DIY NAS is unfortunately in the apartment, and I’m reluctant to try He disks due to their intense sound profile.



  • Wifi pretty much excludes k*s, and I assume that Swarm and Nomad would also be impacted by blips in wireless connectivity. You could see how things work out with a load balancer / reverse proxy on a wired connection, which then health-checks the downstream services and routes requests to the available instances.

    Please look into Wifi-specific issues with the various orchestration platforms before deciding to try one out. A hypervisor is usually a win-win, until you try to do failover.
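    The reverse-proxy idea can look like this in nginx (open-source nginx only does passive health checks; the hostnames are placeholders):

    ```nginx
    upstream app_backends {
        # Mark a backend down after 3 failed requests, retry it after 30s
        server node1.lan:8080 max_fails=3 fail_timeout=30s;
        server node2.lan:8080 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_backends;
            # On error/timeout, transparently retry the next backend
            proxy_next_upstream error timeout http_502 http_503;
        }
    }
    ```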


  • I wasn’t intending on doing this, instead opting to install Pi-hole, Log2Ram, UFW, and the… other… softwares directly to the OS for simplicity. Why would one set up Pi-hole et al in containers instead of directly?

    So there are many reasons, and this is something I nowadays almost always do. But keep in mind that some of us have used Docker for our applications at work for over half a decade now. Some of these points might be relevant to you, others might seem or be unimportant.

    • The first and most important thing you gain is a declarative way to describe the environment (OS, dependencies, environment variables, configuration).
    • Then there is the packaging format. Containers are a way to package an application with its dependencies and distribute it easily through Docker Hub (or other registries). Redeploying is a matter of running a script and specifying the image and its tag (never use latest). You will never ask yourself again: “What did I need to do to install this? Run some random install.sh script off a GitHub URL?”
    • Networking with Docker is a bit hit and miss, but the big win is that you can have whatever software running on any port inside the container and expose it on a different port on the host. E.g. two apps both run on port 8080 natively, so one of them will fail to start because the port is taken. With containers you can keep them running on their preferred ports, but expose one on 18080 and the other on 19080 instead.
    • You keep your host simple and free of installed software and packages. This is less of a problem with apps that ship as native executables, but some languages require you to install a runtime on the host to start the app. Think .NET or Java, but there is also Python, which requires you to install it on the host and keep the versions compatible (there are virtual environments for that, but I’m going into too much detail already).
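    The port remapping point from above, as a compose sketch (the image names are made-up examples):

    ```yaml
    # docker-compose.yml -- both apps listen on 8080 *inside* their container
    services:
      app-one:
        image: example/app-one:1.2.3   # hypothetical image; pin a tag, never :latest
        ports:
          - "18080:8080"               # host:container
      app-two:
        image: example/app-two:4.5.6
        ports:
          - "19080:8080"
    ```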

    Basically I have a very simple host setup with only a few packages installed. Then I remotely configure and start my containers, expose ports, etc. And I can cleanly define where my configuration lives, back up only that particular folder, and keep the rest of the setup easy to redeploy.