Hi! I’m starting out with self-hosting. I was setting up Grafana for system monitoring of my mini-PC. However, I ran into the issue of keeping credentials secure in my Docker Compose file. I ended up using Docker Swarm since it was the path of least resistance. I’ve managed to set up a Grafana/Prometheus/Node Exporter stack and it’s working well.

However, before continuing with Docker Swarm, I want to check whether this is a good idea or whether I’ll potentially back myself into a corner. Some of the options I’ve found while searching:

  • Continue with Docker Swarm and look into automating the stack/swarm in the future.

    • Ansible has modules for managing Docker Swarm (see the sketch after this list).
  • Self-hosted vault: I want to avoid hosting my own secret/password manager at the moment.

  • Kubernetes (k8s / k3s) - I don’t wanna 😭

    • More seriously, I’m actually learning this for work but don’t see the point of implementing it at home. The extra overhead doesn’t seem worth it for a single node cluster.
  • Live dangerously: store credentials in plaintext. Also use admin as the password for everything.
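
For the Ansible route, here’s roughly what I have in mind (an untested sketch; the modules are from the community.docker collection, and the host, stack, and variable names are just placeholders):

```yaml
# Create the swarm secret and deploy the monitoring stack via Ansible.
- hosts: minipc
  tasks:
    - name: Create the Grafana admin secret on the swarm
      community.docker.docker_secret:
        name: grafana_admin_password
        data: "{{ vault_grafana_admin_password }}"   # e.g. pulled from ansible-vault
        state: present

    - name: Deploy the stack from its compose file
      community.docker.docker_stack:
        name: monitoring
        compose:
          - /opt/stacks/monitoring/docker-compose.yml
        state: present
```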

Edit: Most of the services I’m planning on hosting will likely be single-replica services.

  • emhl@feddit.org · 3 points · 4 months ago

    What are your reasons for using Docker Swarm instead of normal Docker if you don’t want to do replication?

    • UncommonBagOfLoot@lemmy.world (OP) · 3 points · 4 months ago

      To use Docker secrets so that the secrets are encrypted on the host. Using Docker Swarm was the path of least resistance to set up my system monitoring stack.

      Docker Compose can use secrets without Swarm, but my understanding is that those are stored in plaintext on the host.
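
      Roughly what I’m doing in the Swarm stack right now (service names and values are just placeholders):

      ```yaml
      # Create the secret once on the manager node:
      #   printf 'changeme' | docker secret create grafana_admin_password -
      # then deploy with: docker stack deploy -c docker-compose.yml monitoring
      services:
        grafana:
          image: grafana/grafana
          secrets:
            - grafana_admin_password
          environment:
            # Grafana can read the admin password from the mounted secret file
            - GF_SECURITY_ADMIN_PASSWORD__FILE=/run/secrets/grafana_admin_password
      secrets:
        grafana_admin_password:
          external: true
      ```

      That way the actual password never sits in the compose file itself.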

      • i_am_not_a_robot@discuss.tchncs.de · 12 points · 4 months ago

        Docker Swarm encryption doesn’t work for your use case. The documentation says the secret is stored encrypted but can be decrypted by swarm manager nodes and by nodes running services that use the secret, and on a single node those are both the same machine. If you don’t have to unlock the swarm on startup, that means the encrypted values and the decryption key live next to each other on the same computer, so anyone who has access to the encrypted secrets can also decrypt them.
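
        For reference, the “unlock on startup” part is the swarm autolock feature; without something like this, the decryption key sits on the same disk as the encrypted secrets:

        ```sh
        # Enable autolock: the key that encrypts the swarm's Raft store is no longer
        # stored on disk; Docker prints an unlock key for you to keep somewhere else.
        docker swarm update --autolock=true

        # After every daemon restart the node stays locked until you run:
        docker swarm unlock
        ```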

      • Lem453@lemmy.ca · 4 points · edited · 4 months ago

        When I was starting out I almost went down the same path. In the end, Docker secrets are mainly useful when the same key needs to be distributed across multiple nodes.

        Storing the keys locally in an env file that is only accessible to the docker user is close enough to the same thing for home use and greatly simplifies your setup.

        I would suggest using a folder for each stack that contains one docker compose file and one env file. The env file contains the passwords; the rest of the env variables are defined in the compose file itself. Exclude the env files from your git repo (if you use one for version control) so you never check a secret into git. (In practice I have one folder for compose files that is in git, and my env files are stored in a different folder that isn’t.)
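
        A rough sketch of that layout (the service, paths, and variable names are just examples):

        ```yaml
        # stack folder layout:
        #   grafana/docker-compose.yml   <- safe to commit, no literal secrets
        #   grafana/.env                 <- chmod 600, readable only by the docker user, not in git
        #
        # contents of .env:
        #   GF_SECURITY_ADMIN_PASSWORD=changeme
        services:
          grafana:
            image: grafana/grafana
            environment:
              # substituted from .env at deploy time, so the value never appears in the compose file
              - GF_SECURITY_ADMIN_PASSWORD=${GF_SECURITY_ADMIN_PASSWORD}
              - GF_SERVER_ROOT_URL=http://grafana.lan:3000   # non-secret config stays inline
        ```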

        I do all of this via Portainer; it will set up the above folder structure for you. Each stack is a compose file that Portainer pulls from my self-hosted Gitea (on another machine). Portainer creates the env file itself when you add the env variables from the GUI.

        If someone gets access to your system and is able to read the env file, they already have high-level access and your system is compromised regardless of whether the secrets are encrypted via Swarm or not.