I run a small server with Proxmox, and I’m wondering what your opinions are on running Docker in separate LXC containers vs. running a single dedicated VM for all Docker containers.

I started with LXC containers because I was more familiar with installing services the classic Linux way. I later added a VM specifically for running Docker containers. I’m now wondering whether I should continue with this strategy and just add some more resources to the Docker VM.

On one hand, backups seem to be easier with individual LXCs (I’ve had situations where updating a Docker container broke the existing configuration, and the easiest fix was to restore the entire VM from backup). On the other hand, it seems like more overhead to install Docker in each individual LXC.
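
To make that concrete, this is roughly what the backup/restore dance looks like from the Proxmox shell (the IDs, storage names and archive paths below are just placeholders):

```sh
# back up a single LXC (CT 101) while it keeps running
vzdump 101 --mode snapshot --storage local --compress zstd

# roll a broken LXC back by restoring that archive over the existing CT
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-2024_01_01-00_00_00.tar.zst --storage local-lvm --force

# restoring the whole Docker VM (VM 200) from its backup works the same way via qmrestore
qmrestore /var/lib/vz/dump/vzdump-qemu-200-2024_01_01-00_00_00.vma.zst 200 --force
```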

  • just_another_person@lemmy.world
    2 days ago

    This thread has raised so many questions I’d like answered:

    1. Why are people backing up containers?
    2. Why are people running docker-in-docker?
    3. I saw someone mention snapshotting containers…what’s the purpose of this?
    4. Why are people backing up docker installs?

    Seriously thought I was going crazy reading some of these, and now I’m convinced the majority of people posting suggestions in here do not understand how to use containers at all.

    Flat file configs, volumes, layers, versioning…it’s like people don’t know what these are or how to use them, and that is incredibly disconcerting.

    • ddh@lemmy.sdf.org
      2 days ago
      1. I’m backing up LXCs, like I’d back up a VM. I don’t back up Docker containers, just their config and volumes.
      2. I don’t think anyone is doing that. We’re talking about installing Docker in LXC. One of the Proxmox rules you can live by is to not install software on the host. I don’t see the problem with installing Docker in an LXC for that reason.
      3. I’ll snapshot an LXC before running things like a dist-upgrade, or testing something that might break things (rough example after this list). It’s very easy, so why not?
      4. I back up my LXC that has Docker installed because that way it’s easy to restore everything, including local volumes, to various points in time.
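
      A rough example of 2 and 3 from the Proxmox host shell (CT ID 101 and the snapshot name are just placeholders):

      ```sh
      # 2: the usual prerequisite for running Docker inside an unprivileged LXC
      pct set 101 --features nesting=1,keyctl=1

      # 3: snapshot before a risky change, roll back only if it goes wrong
      pct snapshot 101 pre-dist-upgrade
      pct enter 101                          # ...run the dist-upgrade, test things, exit
      pct rollback 101 pre-dist-upgrade      # if something broke
      pct delsnapshot 101 pre-dist-upgrade   # otherwise, clean up
      ```
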
    • mr_jaaay@lemmy.ml (OP)
      2 days ago

      Follow-up question: do you have any good resources to start with for a simple overview of how we should be using containers? I’m not a developer, and from my experience, most documentation on the topic I’ve come across targets developers and DevOps people. As someone else mentioned, I use Docker because it’s the way lots of things happen to be packaged - I’m more used to the Debian APT way of doing things.

      • just_another_person@lemmy.world
        2 days ago

        I don’t have anything handy, but I see your point, and I’d shame lazy devs for not properly packaging things maybe 😂

        You mentioned you use Proxmox, which is already an abstraction on top of bare metal, so that’s about as easy an interface as I can imagine for a hosted machine, short of using something like Docker Desktop to manage a machine remotely (not a good idea).

        As a developer, I was slightly confused by some of the suggestions on ways to use things being posted in this sub, but some of the responses clarify that. There isn’t enough simplicity in explaining the “what” of containers, so people just use them in the simplest way they understand, which also happens to be the “wrong” way. It’s kind of hard to grasp that when you live with these things 24/7 for years. It’s a similar deal with networking solutions like Tailscale, where I see people installing it everywhere and not understanding why that’s a bad idea 😂

        To save you a lot of learning, I won’t go down a rabbit hole if you just want something to work well. Ping back here if you get into a spot of trouble, and I’ll definitely hop in with a more detailed explanation of a workflow that’s more effective than what it seems most people in here are using.

        In fact, I may have just been inspired to do a write up on it.

        • mr_jaaay@lemmy.ml (OP)
          2 days ago

          Fair enough, would love to read something like this :-)

          Yeah, I’ve been into Linux for 20 years, somewhat on and off, as an all-around sysadmin in mainly Windows places. I’ve learned just enough Docker to use it instead of apt - which I’d prefer, but as you said, many newer services don’t exist in Debian repos or as .deb packages, only as Docker images or similar.

          • just_another_person@lemmy.world
            2 days ago

            If you’re familiar with Linux, just read the Dockerfile of any given project. It’s literally just a script for running a thing. You can take that info and install things however you’d like if needed.
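
            For example, a typical Dockerfile isn’t much more than this (a made-up sketch, not from any real project - the package, paths and command are hypothetical):

            ```dockerfile
            # "start from a minimal Debian install"
            FROM debian:bookworm-slim

            # "apt install the thing"
            RUN apt-get update && apt-get install -y someapp

            # "drop your config into /etc"
            COPY someapp.conf /etc/someapp/someapp.conf

            # "it listens on port 8080"
            EXPOSE 8080

            # "this is the command the service runs"
            CMD ["someapp", "--config", "/etc/someapp/someapp.conf"]
            ```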

    • sugar_in_your_tea@sh.itjust.works
      2 days ago

      I’m guessing people are largely using the wrong terminology for things that make more sense, like backing up/snapshotting config and data that containers use. Maybe they’re also backing up images (which a lot of people call “containers”), just in case it gets yanked from wherever they got it from.
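
      (If the worry is an image disappearing upstream, keeping a local copy is simple enough - the image name and tag below are just an example:)

      ```sh
      # save a local tarball of an image in case it gets yanked upstream
      docker image save jellyfin/jellyfin:10.9.2 -o jellyfin-10.9.2.tar
      # reload it later if needed
      docker image load -i jellyfin-10.9.2.tar
      ```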

      That said, yeah, someone should write a primer on how to use Docker properly and link it in the sidebar. Something like:

      1. docker-compose or podman for managing containers (a lot easier than docker run)
      2. how to use bind mounts and set permissions, as well as sharing volumes between containers (esp. useful if your TLS cert renewal is a separate container from your TLS server)
      3. docker networks - how to get containers to talk w/o exposing their ports system-wide (I only expose two ports, Caddy for TLS, and Jellyfin because my old smart TV can’t seem to handle TLS)
      4. how tags work - i.e. when to use latest, the difference between <image>:<major>.<minor>.<patch> and <image>:<major>, etc, and updating images (i.e. what happens when you “pull”). A rough compose sketch tying these together is below.
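
      Something like this, as a rough sketch (service names, tags, paths and UIDs are just placeholders):

      ```yaml
      # docker-compose.yml sketch: pinned tags, bind-mounted config, a shared
      # network, and only the reverse proxy's ports published system-wide.
      services:
        caddy:
          image: caddy:2                      # pinned major version, not :latest
          ports:
            - "80:80"
            - "443:443"                       # the only ports exposed to the host
          volumes:
            - ./caddy/Caddyfile:/etc/caddy/Caddyfile   # bind-mounted config
            - caddy_data:/data                # named volume for TLS/cert state
          networks:
            - proxy

        someapp:                              # hypothetical backend service
          image: example/someapp:1.2.3        # pinned <major>.<minor>.<patch>
          user: "1000:1000"                   # run as an unprivileged host UID:GID
          volumes:
            - ./someapp/config:/config        # config/data on the host for easy backup
          networks:
            - proxy                           # reachable from caddy without publishing a port

      networks:
        proxy:

      volumes:
        caddy_data:
      ```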

      I’ve been using Docker for years, but I’m sure there are some best practices I’m missing since I’m more of a developer than a sysadmin.