Hi. Since yesterday I've been self-hosting all my stuff on a Raspberry Pi and two ODROIDs. Everything works OK, but after reading about a few apps that aren't supported on the ARM architecture of the SBCs, and about the advantages of the backup solution in Proxmox, I bought a little server (6500T/8GB/250GB) to try Proxmox.
I've installed Proxmox, but now - before I install my first VM - I have a few questions:
a) Which Linux OS should I use? Ubuntu Server?
b) Should it be headless?
The server is in the cellar of my house, so would there be any advantage to installing an OS with a GUI?
I run Debian on all my VMs; they have no GUI installed at all. I manage all of them over SSH.
Yes, that is what I am used to.
I guess headless is better for performance, and I don't see any advantage to a GUI anyway.
Another question: why do you have several Debian VMs? You could also just use one, right?
I use multiple VMs, and group things either by security layer or by purpose.
When organising by security layer, I have a VM for reverse proxies. Then I have a VM for middleware/services. Another VM (or multiple) for database(s). Another VM for backend/daemon type things.
Most of them end up running Docker, but still.
It lets me tightly control access between the layers of the application (if the reverse proxy gets pwned, the damage is hopefully contained there. If they get through that, they only get to the middleware. Ideally the database is well protected. Of course, none of that really matters when there's a bug in my middleware code!)
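To give a rough idea of what "containing" a layer means in practice, something like this inside the middleware VM only lets the reverse proxy reach the app port. The IP addresses and port here are made up, so adjust them for your own network:

```
#!/bin/bash
# Hypothetical layout: reverse proxy VM at 10.0.10.10,
# this middleware VM serving an app on TCP 8080.

PROXY_IP="10.0.10.10"   # assumed address of the reverse proxy VM
APP_PORT="8080"         # assumed port the middleware listens on

# Allow the reverse proxy to reach the app port...
iptables -A INPUT -p tcp -s "$PROXY_IP" --dport "$APP_PORT" -j ACCEPT
# ...and drop that port for everyone else.
iptables -A INPUT -p tcp --dport "$APP_PORT" -j DROP
```

The Proxmox firewall can do the same thing per VM from the web UI if you'd rather not manage rules inside each guest.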
Another way to do it is by purpose.
Say you have media server things, network management things, CCTV things, productivity apps, etc.
Grouping all the media server things in a VM means your DNS or whatever doesn't die when you whiff an update to the media server. Or you don't lose your CCTV when you somehow link its storage directory into the media server and then accidentally delete it. If that makes sense.
Another way might be by backup strategy.
A database hopefully has point-in-time backup/recovery systems in place, whereas a reverse proxy is just some config (hopefully stored on GitHub) and can easily be rebuilt from scratch.
So you could also separate things by how "live" the data is, or how often something is backed up, or how often something gets reconfigured/tweaked/updated.
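As a rough example (the VMIDs and storage name are made up), the "just config" VMs can get an occasional snapshot backup, while the database VM gets dumped far more often on top of its own point-in-time tooling:

```
#!/bin/bash
# Run on the Proxmox host. 101 = reverse proxy (rebuildable from git),
# 102 = database VM. IDs and storage name are placeholders.

# Occasional snapshot of the easily rebuilt proxy VM
vzdump 101 --storage local --mode snapshot --compress zstd

# The database VM gets the same treatment nightly, in addition to
# whatever point-in-time backups the DB itself does internally.
vzdump 102 --storage local --mode snapshot --compress zstd
```

In practice you'd schedule this under Datacenter → Backup in the web UI rather than running it by hand.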
I use VMs to section things out accordingly.
It takes a few extra GB of storage/memory and has a minor performance impact, but it limits the amount of damage my dumb ass can do.
I run one VM which some small Docker containers go on, but whenever I'm trying something out it's always in a Debian or Ubuntu VM - things usually just work more easily. If it turns out to be a service I'm serious about running, then I'll sometimes spend the time to set it up in its own LXC. Even a single Docker container.
I much prefer each service in its own VM or LXC - for that same reason. Easier backups, easier to move to other nodes, easier to see the resources being used.
@moddy with that processor and your 8GB you have plenty of room to play with multiple VMs. Headless Ubuntu is probably the best place to start, just because of the volume of results you get when googling issues. Enjoy.
OK, I will have to check out what an LXC is before I start, but that helped a lot. Thanks.
It's a bit like Docker, in that it's a sort of isolated system, but you use it more like a virtual machine (VM). It's lighter than a VM because it uses the host kernel, so you can run lots of them without consuming too many resources.
In the Proxmox web interface, up in the top right corner there's a "Create CT" button. If you click through all of that (once again I am recommending Ubuntu) you'll have your first LXC container up in a couple of minutes - the quick creation is another advantage over VMs. One of the joys of your excellent choice of Proxmox as a base is that you can easily experiment with such things.
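If you ever want to do the same thing from the Proxmox shell instead of the web UI, it looks roughly like this. The VMID, storage names and template filename are just examples (`pveam available` shows what you can actually download):

```
# Refresh the template list and grab an Ubuntu LXC template
pveam update
pveam download local ubuntu-22.04-standard_22.04-1_amd64.tar.zst

# Create a small container from it and start it up
# (200, local-lvm, vmbr0 etc. are placeholders for your own setup)
pct create 200 local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst \
    --hostname test-ct --memory 1024 --cores 2 \
    --rootfs local-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200
```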
I run a VM for each service: a PHP VM, a MySQL VM, etc. But yes, you could just have one big VM run everything.
At that point, why even run Proxmox?
As I wrote in my other reply, you typically want a separate VM for each service so that the OS configurations don't conflict, and also so that you can shut down the VM for one service (e.g. for installing updates or migrating to another cluster node) without causing downtime to other services.
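For example, if you have a cluster, one service can be moved or taken down for updates without touching the rest (VMIDs and node name here are just placeholders):

```
# Live-migrate one service's VM to another cluster node
# before doing maintenance on this one.
qm migrate 102 pve-node2 --online

# Or take a single service down for updates while everything
# else keeps running in its own VM.
qm shutdown 103
# ...install updates inside the guest or change its config...
qm start 103
```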