• 2 Posts
  • 53 Comments
Joined 1 year ago
Cake day: June 10th, 2023




  • ZFS is still the de facto standard for a reliable filesystem. It’s super stable, and annoyingly strict about what you can do with it. Its RAID5 and RAID6 equivalents (RAIDZ1 and RAIDZ2) are the only software RAID options at those levels that are trusted not to eat your data. I’ve run a TrueNAS server with RAIDZ2 for years now, with absolutely no issues and tens of terabytes of data.

    But these copy-on-write filesystems, such as ZFS or btrfs, are not great for all purposes. For example, running a Postgres server on any CoW filesystem requires a lot of tweaking to get reasonable database speeds. It’s doable, but there are a lot of settings to change.
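
    To give an idea of what that tweaking means in practice, here is a rough sketch of the ZFS-side settings people usually start with for a Postgres data directory; the dataset name tank/pgdata is just a placeholder, and the values should be checked against your own workload:

      # match the dataset record size to the 8K pages Postgres uses
      zfs set recordsize=8K tank/pgdata
      # let Postgres' shared_buffers do the data caching instead of the ARC
      zfs set primarycache=metadata tank/pgdata
      # favour throughput over latency for synchronous writes
      zfs set logbias=throughput tank/pgdata
      # cheap wins that usually help
      zfs set compression=lz4 tank/pgdata
      zfs set atime=off tank/pgdata

    On the Postgres side people also look at things like full_page_writes, since a CoW filesystem never tears a page on its own, but that is the part where you really want to read up before touching anything.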

    About the code quality of Linux filesystems, Kent Overstreet, the author of bcachefs (the next new CoW filesystem), has a good write-up of the ups and downs:

    • ext4, which works - mostly - but is showing its age. The codebase terrifies most filesystem developers who have had to work on it, and heavy users still run into terrifying performance and data corruption bugs with frightening regularity. The general opinion of filesystem developers is that it’s a miracle it works as well as it does, and ext4’s best feature is its fsck (which does indeed work miracles).
    • xfs, which is reliable and robust but still fundamentally a classical design - it’s designed around update in place, not copy on write (COW). As someone who’s both read and written quite a bit of filesystem code, the xfs developers (and Dave Chinner in particular) routinely impress me with just how rigorous their code is - the quality of the xfs code is genuinely head and shoulders above any other upstream filesystem. Unfortunately, there is a long list of very desirable features that are not really possible in a non COW filesystem, and it is generally recognized that xfs will not be the vehicle for those features.
    • btrfs, which was supposed to be Linux’s next generation COW filesystem - Linux’s answer to zfs. Unfortunately, too much code was written too quickly without focusing on getting the core design correct first, and now it has too many design mistakes baked into the on disk format and an enormous, messy codebase - bigger than xfs. It’s taken far too long to stabilize as well - poisoning the well for future filesystems because too many people were burned on btrfs, repeatedly (e.g. Fedora’s tried to switch to btrfs multiple times and had to back out at the last minute, and server vendors who years ago hoped to one day roll out btrfs are now quietly migrating to xfs instead).
    • zfs, to which we all owe a debt for showing us what could be done in a COW filesystem, but is never going to be a first class citizen on Linux. Also, they made certain design compromises that I can’t fault them for - but it’s possible to do better. (Primarily, zfs is block based, not extent based, whereas all other modern filesystems have been extent based for years: the reason they did this is that extents plus snapshots are really hard).

    I started evaluating bcachefs on my main workstation when it arrived in the stable kernels. It can do pretty good raid-1 with encryption and compression, a combination that isn’t really available integrated into the filesystem anywhere else but ZFS. And ZFS doesn’t work with all kernels, which prevents updating to the latest and greatest. It is already a pretty usable system, and in a few years it will probably take the crown as the default filesystem in mainstream distros.
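
    For reference, the raid-1 + encryption + compression combination is all set at format time, roughly like this (device names are placeholders, and the exact option spellings are worth double-checking against the bcachefs-tools version you have):

      bcachefs format \
          --encrypted \
          --compression=zstd \
          --replicas=2 \
          /dev/sdX /dev/sdY
      # unlock asks for the passphrase, then the devices mount as one filesystem
      bcachefs unlock /dev/sdX
      mount -t bcachefs /dev/sdX:/dev/sdY /mnt/data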


  • I’ve been digging into the settings of this printer and, sadly, the only way it can send a scan is as a fax… It’s the entry-level model and has been serving us very nicely for years. It even connects to the internet, but lacks features such as email, SMB or FTP. To me this looks like something open source firmware could fix. It has enough processing power to plausibly run a lightweight Linux distribution, so installing one that would enable modern communication protocols doesn’t seem impossible.



  • Of course. My setup now is a Proxmox server + a NAS. What I’m planning to do is install a service for this on Proxmox, have the files synced over NFS to the NAS, and have the NAS back them up every night to Backblaze. And of course I need to keep the paper copies too, but being able to search, tag and archive the documents is great when you need to remember thing X that was mentioned in a paper I got back in 2014.
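
    The nightly Backblaze step can be as simple as a cron entry on the NAS. A minimal sketch, assuming an rclone remote named b2 pointing at a B2 bucket called documents-backup and the documents living under /mnt/documents (all made-up names):

      # /etc/cron.d/documents-backup (illustrative)
      # sync the NFS-shared document folder to Backblaze B2 at 03:00 every night
      0 3 * * * root rclone sync /mnt/documents b2:documents-backup --log-file /var/log/documents-backup.log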




  • Installed it on my homelab today because of this thread. I had never really managed my phone images in any way or uploaded them anywhere; this was the first time. About 5 gigabytes of images and videos were synced to my NAS in a few minutes, and now I can search them and all that. It’s a pretty cool setup, although the installation is a bit tricky if you don’t go down the path they give you. I run a Postgres server in Proxmox, and you have to install just the right version of pgvecto.rs for the system to work.

    Browsing the issues, I was able to figure out what went wrong, and after downgrading everything has been fine.
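
    If someone hits the same thing: pgvecto.rs shows up in Postgres as the vectors extension, so the installed version is easy to check straight from psql and compare against what your app release expects (nothing here is specific to my setup):

      -- version of the extension currently installed in the database
      SELECT extname, extversion FROM pg_extension WHERE extname = 'vectors';
      -- versions the server has packages for
      SELECT name, default_version, installed_version
      FROM pg_available_extensions WHERE name = 'vectors';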



  • As said in the thread, you need some kind of tunnel that stays up on its own and doesn’t need manual fixing when your home connection drops.

    WireGuard, or, if you want a super easy setup, Tailscale’s flavour of WireGuard, is great for this. Now you have a private IP address in your VPN network for your home server, one that stays up and answers HTTP. The next thing you need is a cheap VPS somewhere with a public IP address. Once that is running and joined to the WireGuard network, so you can reach your home server from the VPS, you need an Nginx proxy on the public server. Either set it up by hand, or use a service such as Nginx Proxy Manager to handle the proxy setup.
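
    As a sketch of the tunnel part, a minimal WireGuard config on the home server could look like the following; addresses, port and keys are made up, and Tailscale sets all of this up for you automatically:

      # /etc/wireguard/wg0.conf on the home server (illustrative values)
      [Interface]
      Address = 10.8.0.2/24
      PrivateKey = <home-server-private-key>

      [Peer]
      # the VPS with the public IP address
      PublicKey = <vps-public-key>
      Endpoint = vps.example.com:51820
      AllowedIPs = 10.8.0.1/32
      # keep the tunnel alive from behind NAT
      PersistentKeepalive = 25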

    How it basically works: you point a domain name (an A or CNAME record) at the public VPS, then configure Nginx so that anything coming in for domain X is proxied to the VPN IP address Y and port Z. Now you can add HTTPS to this domain and get a Let’s Encrypt certificate for it. You can, again, do this manually with Nginx, or let Nginx Proxy Manager handle it for you.
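
    Written by hand, the Nginx side is roughly this; the domain, the VPN IP and the port are placeholders, Nginx Proxy Manager generates the equivalent for you, and certbot fills in the certificate paths:

      # /etc/nginx/sites-available/app.example.com (illustrative)
      server {
          listen 443 ssl;
          server_name app.example.com;                  # domain X

          ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

          location / {
              proxy_pass http://10.8.0.2:8080;          # VPN IP Y, port Z
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
          }
      }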

    Finally: stay safe. If you really open services from your home to the public internet, be very sure to keep them all updated and use strong passwords on every one of them. Additionally, you can use the home services directly over the WireGuard/Tailscale network by accessing their private IP addresses; your computer just needs to be in the same network as them.


  • I’m running it in my homelab for projects I do not (yet) push anywhere public, and for projects containing private items such as ssh keys. It is snappy and has a ton of features. I can imagine that once federation support works, you will be able to set up your own git forge and contribute more easily to other forges, no matter what software they run.

    And, to be honest, that is already how git works if you use the email workflow. Here we just get a web-based flow with federated issues and pull requests. But if email is enough for you, you already have full federation with plain email and git.
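
    The email workflow really is just stock git; the list address and base branch below are placeholders:

      # turn the commits on your branch into patch files
      git format-patch origin/main
      # and mail them to the project list or maintainer
      # (git send-email needs your SMTP settings configured first)
      git send-email --to="project-list@example.com" *.patch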


  • I borrowed an installation CD from the local library around 1998. It was RedHat 5.x, and I started messing around with it because I was interested in alternative operating systems. Before that, I had OS/2 Warp 3.0 on our IBM Pentium 100 MHz family computer, which didn’t really do it for me, to be honest.

    It took weeks to get anything working in Linux. I went to the library and borrowed books. Our middle school had an internet connection, so I used it to learn how to configure modelines correctly to get X11 running.

    When it did finally run, the default window manager was FVWM95, almost like Windows 95!

    I used OS X for a few years in the PowerPC era, then switched back to Linux around 2008.

    Edit: my real love for Linux started when I got Debian running. RedHat didn’t have anything comparable to apt in those days. You had to download RPM packages manually, along with all their dependencies, while apt just worked with one command.