(I know this is about selfhosting, but I'm forced to use cloud services: selfhosting isn't viable with the DSL internet speeds at my house, and I need this to be accessible from outside my home.)

I recently made a Linode account (and got the free credit), and I'm planning on paying only $5 a month if I can. I noticed that Nextcloud AIO (from the Linode "Marketplace") ran very well on the lowest shared-CPU plan (1GB RAM, 25GB storage, 1 CPU core; the CPU appears to be an AMD Epyc).

Will it be okay for me to host a WordPress website and a Nextcloud instance on the same server? I will be using Docker/Podman, and only I will be using the Nextcloud instance.

  • TCB13@lemmy.world · 1 year ago

    If you do a barebones install, i.e. without the Docker overhead, it might work.

        • Solar Bear@slrpnk.net · 1 year ago

          Convincing argument, but unfortunately a cursory Google search will reveal he was right. There is very little CPU overhead. The only real consideration is a bit of extra storage and RAM to store and load the redundant dependencies of the container.

          • TCB13@lemmy.world · 1 year ago

            You're also ignoring the amount of work the kernel has to do to shift UUIDs around, the resources the Docker daemon itself uses, and the redundant machinery needed to make sure those processes keep running, which on a clean system would usually be handled by systemd. Yes, containerization is much better nowadays, but it's still overhead.

            • StarDreamer@lemmy.blahaj.zone · 1 year ago

              Can't comment much about the docker side since it's not something I'm familiar with.

              For the kernel part, assuming what you're referring to as UUIDs is the pid namespace mechanism, I'm failing to see how that would add overhead with containers. The namespace lookups/permission checks are performed regardless of whether the process is in a container or not. There is no fast path for non-containerized processes. The worst overhead that this could add is probably one extra ptr chase in the namespace linked list.
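
              For reference, by "pid namespace mechanism" I mean what you get from clone() with CLONE_NEWPID. A minimal C sketch (needs root or an enclosing user namespace to actually run): the child simply sees itself as PID 1, the parent sees it under an ordinary PID, and that per-process namespace bookkeeping exists whether or not you're running a container.

                #define _GNU_SOURCE
                #include <sched.h>
                #include <signal.h>
                #include <stdio.h>
                #include <stdlib.h>
                #include <sys/wait.h>
                #include <unistd.h>

                /* Runs inside the new PID namespace: sees itself as PID 1. */
                static int child(void *arg) {
                    (void)arg;
                    printf("inside new PID namespace: getpid() = %d\n", (int)getpid());
                    return 0;
                }

                int main(void) {
                    const size_t stack_size = 1024 * 1024;
                    char *stack = malloc(stack_size);
                    if (!stack) { perror("malloc"); return 1; }

                    /* CLONE_NEWPID needs CAP_SYS_ADMIN (or an enclosing user namespace).
                       The stack grows down, so pass the top of the allocation. */
                    pid_t pid = clone(child, stack + stack_size, CLONE_NEWPID | SIGCHLD, NULL);
                    if (pid == -1) { perror("clone"); return 1; }

                    printf("from the parent namespace: child is PID %d\n", (int)pid);
                    waitpid(pid, NULL, 0);
                    free(stack);
                    return 0;
                }

              Compile with gcc and run it as root (e.g. gcc pidns.c -o pidns && sudo ./pidns) and you'll see PID 1 reported inside, a normal PID outside. The lookup path the kernel walks to resolve those PIDs is the same one every non-containerized process already goes through.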