I've been using Nextcloud for over a year now. I started on bare metal, then switched to the basic Docker container with Collabora in its own container. That was tricky to get running nicely. Now I've been running Nextcloud AIO for a couple of months and am pretty happy, but it feels a little weird with all those containers and all that overhead.
How do you guys host NC + Collabora? Is there an easy, solid solution?
I'd argue the opposite: it's gotten to the point where I care very little about the dependencies of anything I'm running, and it's LESS of a delicate balancing act.
I don’t care what version of postgres or php or nginx or mysql or rust or node or python or whatever a given app needs, because it’s in the container or stack and doesn’t impact anything else running on the system.
All that matters at that point is 'does the stack work?', and you don't need to spend any time thinking about dependencies or interactions.
I also treat EACH stack as its own thing: if it needs a database, I stand one up. If it needs some NoSQL, it gets its own.
Makes maintenance of and upgrades to everything super simple, since none of the ~30 stacks (~120 containers) I'm running can impact, screw with, or create dependency issues for anything else I'm running.
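For illustration, one such self-contained stack might look something like this compose file (a minimal sketch; images, versions, and credentials are placeholders, not my actual setup):

```yaml
# One stack = the app plus everything it needs, including its own database.
services:
  app:
    image: nextcloud:29            # pinned per stack; other stacks can pin whatever they need
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      POSTGRES_HOST: db            # talks only to the database defined below
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: change-me # placeholder
    volumes:
      - app_data:/var/www/html
    depends_on:
      - db

  db:
    image: postgres:16             # this Postgres exists solely for this stack
    restart: unless-stopped
    environment:
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: change-me # placeholder
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  app_data:
  db_data:
```

Upgrade, break, or delete that stack and nothing outside it notices.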
Though, in fairness, if you're only running two or three things, I could see how managing the docker layer MIGHT take more time than managing the applications themselves.
This is obviously not how any of this works: down the line those stacks very much add up and compete with each other for CPU/memory/IO/…. That's inherent to the physical nature of the hardware, its architecture, and the finiteness of its resources. And here comes the balancing act; it's just unavoidable.
You may not notice it because you've thrown enough hardware at the problem, but I wouldn't exactly call that a winning strategy long term, especially not in the context of self-hosting, where you directly foot the bill.
Moreover, the server components you're needlessly multiplying (web servers, databases, application runtimes, …) have spent decades optimizing for resource pooling (shared buffers, caching, event scheduling, …). All of that is thrown away when each one serves a single client/container, which drastically lowers the headroom for optimization and scaling.
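To make the point concrete: every duplicated database carves out its own private buffer pool, so with ten stacks each running its own Postgres you either let them fight over memory or pin each one down by hand. A sketch of what that pinning looks like in compose (hypothetical, made-up numbers):

```yaml
# Capping each stack's database makes the balancing act explicit.
# Values below are illustrative, not recommendations.
services:
  db:
    image: postgres:16
    command: postgres -c shared_buffers=128MB  # a private buffer pool per instance, shared with nothing
    mem_limit: 512m   # hard cap; ten such stacks would reserve ~5 GB for databases alone
    cpus: 0.5
```

Multiply that by every stack and the overhead is real, even if a single instance looks cheap.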
Two things, I think, account for the difference between your view and mine.
First, the value of time. I like self-hosting things, but it's not a 40-hour-a-week job. Docker lets me invest minimal time in maintenance and upkeep, and it restricts the blowback of a bad update to the stack it's in. Yes, I'm using a little more hardware to accomplish this, but hardware is vastly cheaper than my time.
Second, uh, this is a hobby, yeah? I don't think anyone posting here needs their Nextcloud or whatever install to scale to 100,000 concurrent users with 99.999999% uptime SLAs or anything. I mean, yes, you'd certainly do things differently in those environments, but that's really not what this is.
Using containers simplifies maintenance and deployment, and a few percent of CPU usage or a little bit of RAM is unlikely to matter, unless you're big into running everything on a Raspberry Pi Zero or something.