Hello! I have been struggling through a few tutorials on getting a lemmy instance to work correctly when set up with Docker. I have it mostly done, but each attempt leaves various issues that I don't have the knowledge to correct properly. I am familiar with Docker, and I already have an Oracle VPS set up on ARM64 Ubuntu. I also have Portainer and an NGINX proxy set up and working okay. I have an existing lemmy instance “running” but not quite working. My best guess is that I need someone to assist with setting up the docker-compose to work with current updates/settings, as well as the config.hjson.
TIA, and I can't wait to have my own entry into the fediverse working right!
Working setup files for my ARM64 Ubuntu host server. The postgres, lemmy, lemmy-ui, and pictrs containers are all on the lemmyinternal network only; the nginx:1-alpine container is on both networks. docker-compose.yml:
spoiler
version: "3.3"
# JatNote = Note from Jattatak for working YML at this time (Jun 8, 2023)

networks:
  # communication to web and clients
  lemmyexternalproxy:
  # communication between lemmy services
  lemmyinternal:
    driver: bridge
    # JatNote: The internal mode for this network is in the official doc, but is what broke my setup.
    # I left it out to fix it. I advise the same.
    # internal: true

services:
  proxy:
    image: nginx:1-alpine
    networks:
      - lemmyinternal
      - lemmyexternalproxy
    ports:
      # only ports facing any connection from outside
      # JatNote: Ports mapped to nonsense to prevent collision with NGINX Proxy Manager
      - 680:80
      - 6443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      # setup your certbot and letsencrypt config
      - ./certbot:/var/www/certbot
      - ./letsencrypt:/etc/letsencrypt/live
    restart: always
    depends_on:
      - pictrs
      - lemmy-ui

  lemmy:
    # JatNote: I am running on an ARM Ubuntu Virtual Server. Therefore, this is my image.
    # I suggest using matching lemmy/lemmy-ui versions.
    image: dessalines/lemmy:0.17.3-linux-arm64
    hostname: lemmy
    networks:
      - lemmyinternal
    restart: always
    environment:
      - RUST_LOG="warn,lemmy_server=info,lemmy_api=info,lemmy_api_common=info,lemmy_api_crud=info,lemmy_apub=info,lemmy_db_schema=info,lemmy_db_views=info,lemmy_db_views_actor=info,lemmy_db_views_moderator=info,lemmy_routes=info,lemmy_utils=info,lemmy_websocket=info"
    volumes:
      - ./lemmy.hjson:/config/config.hjson
    depends_on:
      - postgres
      - pictrs

  lemmy-ui:
    # JatNote: Again, ARM based image
    image: dessalines/lemmy-ui:0.17.3-linux-arm64
    hostname: lemmy-ui
    networks:
      - lemmyinternal
    environment:
      # this needs to match the hostname defined in the lemmy service
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      # set the outside hostname here
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.bulwarkob.com:1236
      - LEMMY_HTTPS=true
    depends_on:
      - lemmy
    restart: always

  pictrs:
    image: asonix/pictrs
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    networks:
      - lemmyinternal
    environment:
      - PICTRS__API_KEY=API_KEY
    user: 991:991
    volumes:
      - ./volumes/pictrs:/mnt
    restart: always

  postgres:
    image: postgres:15-alpine
    # this needs to match the database host in lemmy.hjson
    hostname: postgres
    networks:
      - lemmyinternal
    environment:
      - POSTGRES_USER=AUser
      - POSTGRES_PASSWORD=APassword
      - POSTGRES_DB=lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
    restart: always
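To confirm the containers actually ended up on the intended networks, you can inspect them. This is just a quick sketch; compose prefixes the network names with the project/stack name, so "lemmy_" here is an assumption you may need to adjust:

# list containers on each network (network names are assumptions)
docker network inspect -f '{{range .Containers}}{{.Name}}{{"\n"}}{{end}}' lemmy_lemmyinternal
docker network inspect -f '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{"\n"}}{{end}}' lemmy_lemmyexternalproxy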
lemmy.hjson:
spoiler
{
  # for more info about the config, check out the documentation
  # https://join-lemmy.org/docs/en/administration/configuration.html
  # only few config options are covered in this example config
  setup: {
    # username for the admin user
    admin_username: "AUser"
    # password for the admin user
    admin_password: "APassword"
    # name of the site (can be changed later)
    site_name: "Bulwark of Boredom"
  }
  opentelemetry_url: "http://otel:4317"
  # the domain name of your instance (eg "lemmy.ml")
  hostname: "lemmy.bulwarkob.com"
  # address where lemmy should listen for incoming requests
  bind: "0.0.0.0"
  # port where lemmy should listen for incoming requests
  port: 8536
  # Whether the site is available over TLS. Needs to be true for federation to work.
  # JatNote: I was advised that this is not necessary. It does work without it.
  # tls_enabled: true
  # pictrs host
  pictrs: {
    url: "http://pictrs:8080/"
    # api_key: "API_KEY"
  }
  # settings related to the postgresql database
  database: {
    # name of the postgres database for lemmy
    database: "lemmy"
    # username to connect to postgres (must match POSTGRES_USER in docker-compose.yml)
    user: "AUser"
    # password to connect to postgres (must match POSTGRES_PASSWORD in docker-compose.yml)
    password: "APassword"
    # host where postgres is running
    host: "postgres"
    # port where postgres can be accessed
    port: 5432
    # maximum number of active sql connections
    pool_size: 5
  }
}
The following nginx.conf is for the internal proxy, which is included in the docker-compose.yml. This is entirely separate from Nginx Proxy Manager (NPM).
nginx.conf:
spoiler
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream lemmy {
        # this needs to map to the lemmy (server) docker service hostname
        server "lemmy:8536";
    }
    upstream lemmy-ui {
        # this needs to map to the lemmy-ui docker service hostname
        server "lemmy-ui:1234";
    }

    server {
        # this is the port inside docker, not the public one yet
        listen 80;
        # change if needed, this is facing the public web
        server_name localhost;
        server_tokens off;

        gzip on;
        gzip_types text/css application/javascript image/svg+xml;
        gzip_vary on;

        # Upload limit, relevant for pictrs
        client_max_body_size 20M;

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        # frontend general requests
        location / {
            # distinguish between ui requests and backend
            # don't change lemmy-ui or lemmy here, they refer to the upstream definitions on top
            set $proxpass "http://lemmy-ui";
            if ($http_accept = "application/activity+json") {
                set $proxpass "http://lemmy";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
                set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
                set $proxpass "http://lemmy";
            }
            proxy_pass $proxpass;

            rewrite ^(.+)/+$ $1 permanent;

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # backend
        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            # proxy common stuff
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
The nginx-proxy-manager container only needs to be in the same container network as the internal nginx:1-alpine container from the stack.
You need to create a proxy host for http port 80 to the IP address of the internal nginx:1-alpine container on the lemmyexternalproxy network in docker. Include the websockets support option.
https://lemmy.bulwarkob.com/pictrs/image/55870601-fb24-4346-8a42-bb14bb90d9e8.png
Then, you can use the SSL tab to do your cert and such. NPM is free to work on other networks with other containers as well, as far as I know.
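If NPM runs as its own separate container/stack, one way to attach it to the stack's external network is docker network connect. This is a sketch under assumptions: the network prefix and the NPM container name depend on your setup, so adjust both:

# assumed names: compose prefixes the network with the project/stack name,
# and "nginx-proxy-manager" is a placeholder for your NPM container name
docker network connect lemmy_lemmyexternalproxy nginx-proxy-manager

After that, the IP of the internal nginx:1-alpine container on that network (from the docker network inspect check above) is what goes into the NPM proxy host.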
Hey @Jattatak, you seem to be the only other person I can find who is facing similar troubles to mine when trying to set up a lemmy instance. I’ve redone my docker-compose, nginx.conf, and lemmy.hjson to be exactly the same as yours (with some changes in the password / domain name). I’m also running an nginx proxy manager container.
However, it seems I’m still having the same problem of being able to see post content but not comments in other instances. I have the added problem that when I try to post a comment on my instance, the form freezes until I refresh the page. The comment does actually get posted.
I’ve also made sure the ‘lemmyinternal’ network is not isolated. Did you manage to do anything to troubleshoot this issue? Are there any ports I need to open on my firewall beyond 80 and 443?
Most likely it is an nginx reverse-proxy issue. I would recommend getting rid of the nginx in the docker-compose if you still have it, and proxying the Lemmy backend and Lemmy-ui directly via the system Nginx, in a similar fashion to the Ansible script's nginx example.
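For reference, a minimal sketch of what that host-level config could look like, assuming the lemmy and lemmy-ui ports are published to the host on localhost (8536 and 1234). The domain, published ports, and missing TLS handling are all assumptions here, not the actual Ansible template:

server {
    listen 80;
    server_name lemmy.example.com;   # assumed domain

    location / {
        # ActivityPub and POST requests go to the backend, everything else to the UI
        set $proxpass "http://127.0.0.1:1234";
        if ($http_accept = "application/activity+json") {
            set $proxpass "http://127.0.0.1:8536";
        }
        if ($request_method = POST) {
            set $proxpass "http://127.0.0.1:8536";
        }
        proxy_pass $proxpass;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
        proxy_pass "http://127.0.0.1:8536";
        # websocket upgrade headers needed by the 0.17.x backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}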
But it’s really hard to do “remote” setup support like this, so you will have to experiment a bit yourself.
I am not an NGINX expert by any means. The lemmy-ui is reachable via the proxy: I can “Sign up” and search for communities and such, but it seems like the backend is failing. Maybe an issue between lemmy and postgres?
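(One quick way to rule a lemmy/postgres problem in or out is to check the backend and database logs; a sketch, assuming you run this from the directory containing docker-compose.yml:)

docker-compose logs --tail=100 lemmy | grep -iE "error|postgres|connection"
docker-compose logs --tail=100 postgres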
More likely a websocket failure. I heard from another project that uses websockets for frontend-backend communication that Nginx proxy manager seems to have issues with websockets, even when they are enabled via that toggle in the UI. But I have no real idea what the issue might be.
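For reference, websocket proxying in nginx normally comes down to the standard upgrade headers below. This is a generic sketch of what the NPM toggle is supposed to achieve, not necessarily the exact config NPM generates:

# generic nginx websocket proxying directives
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";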
I hear issues with Nginx proxy manager all the time, but obviously it attracts a certain type of user, so it might not be the tool’s fault after all.
How did you set up your NGINX proxy? Can you post your NGINX config file as well as your docker-compose.yml file?
Just posted
Yeah, I'll do a write-up in a bit with everything I can share that helped. I'll post it under the original thread. Not sure if I can sticky things, but I'll try.
Did you ever get this up and running? I am also using NPM on top of the nginx in the stack, and I can’t seem to federate with lemmy.ml
Would love to know if you found a fix that could work for me
I got mine all squared away (I hope), if you still need assistance.
Hey, I do.
Everything is working amazing, except I can’t subscribe to anything on lemmy.ml.
What did you do to fix yours?
I'm actually making my little write-up right now. Will post under the root thread shortly.
Just posted
I couldn’t tell you what the problem was, but using your configs with my parameters fixed it! Here I am posting on lemmy.ml from my own instance! Thank you very much
All I can say is you just got federated! Pay it forward or something altruistic like that.
Hi, can you post your docker-compose.yaml, nginx config, and screenshots/logs of failures?
(1/2) Alright, thanks for helping.
docker-compose.yml
spoiler
version: "3.3"

networks:
  # communication to web and clients
  lemmyexternalproxy:
  # communication between lemmy services
  lemmyinternal:
    driver: bridge
    internal: true

services:
  lemmy:
    image: dessalines/lemmy
    # this hostname is used in nginx reverse proxy and also for lemmy ui to connect to the backend, do not change
    hostname: lemmy
    networks:
      - lemmyinternal
    restart: always
    environment:
      - RUST_LOG="warn,lemmy_server=debug,lemmy_api=debug,lemmy_api_common=debug,lemmy_api_crud=debug,lemmy_apub=debug,lemmy_db_schema=debug,lemmy_db_views=debug,lemmy_db_views_actor=debug,lemmy_db_views_moderator=debug,lemmy_routes=debug,lemmy_utils=debug,lemmy_websocket=debug"
      - RUST_BACKTRACE=full
    volumes:
      - ./lemmy.hjson:/config/config.hjson:Z
    depends_on:
      - postgres
      - pictrs

  lemmy-ui:
    image: dessalines/lemmy-ui
    # use this to build your local lemmy ui image for development
    # run docker compose up --build
    # assuming lemmy-ui is cloned beside the lemmy directory
    # build:
    #   context: ../../lemmy-ui
    #   dockerfile: dev.dockerfile
    networks:
      - lemmyinternal
    environment:
      # this needs to match the hostname defined in the lemmy service
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      # set the outside hostname here
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.bulwarkob.com:1236
      - LEMMY_HTTPS=false
      - LEMMY_UI_DEBUG=true
    depends_on:
      - lemmy
    restart: always

  pictrs:
    image: asonix/pictrs:0.4.0-beta.19
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    # we can set options for pictrs like this, here we set max. image size and forced format for conversion
    # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
    networks:
      - lemmyinternal
    environment:
      - PICTRS_OPENTELEMETRY_URL=http://otel:4137
      - PICTRS__API_KEY=API_KEY
      - RUST_LOG=debug
      - RUST_BACKTRACE=full
      - PICTRS__MEDIA__VIDEO_CODEC=vp9
      - PICTRS__MEDIA__GIF__MAX_WIDTH=256
      - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
      - PICTRS__MEDIA__GIF__MAX_AREA=65536
      - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
    user: 991:991
    volumes:
      - ./volumes/pictrs:/mnt:Z
    restart: always

  postgres:
    image: postgres:15-alpine
    # this needs to match the database host in lemmy.hjson
    # Tune your settings via
    # https://pgtune.leopard.in.ua/#/
    # You can use this technique to add them here
    # https://stackoverflow.com/a/30850095/1655478
    hostname: postgres
    command:
      [
        "postgres",
        "-c", "session_preload_libraries=auto_explain",
        "-c", "auto_explain.log_min_duration=5ms",
        "-c", "auto_explain.log_analyze=true",
        "-c", "track_activity_query_size=1048576",
      ]
    networks:
      - lemmyinternal
      # adding the external facing network to allow direct db access for devs
      - lemmyexternalproxy
    ports:
      # use a different port so it doesn't conflict with a potential postgres db running on the host
      - "5433:5432"
    environment:
      - POSTGRES_USER=noUsrHere
      - POSTGRES_PASSWORD=noPassHere
      - POSTGRES_DB=noDbHere
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data:Z
    restart: always
The NGINX I am using is not the one that came with the stack, but a separate single container for nginx-proxy-manager. I did not customize the conf it installed with, and only used the UI to set up the proxy host and SSL, both of which are working (the front end, at least). The config seems unrelated to this, but I can share it if the rest of the information below is not enough.
nginx config and lemmy.hjson would be useful as well
Sure thing. lemmy.hjson:
spoiler
{
  # for more info about the config, check out the documentation
  # https://join-lemmy.org/docs/en/administration/configuration.html
  # only few config options are covered in this example config
  setup: {
    # username for the admin user
    admin_username: "noUsrHere"
    # password for the admin user
    admin_password: "noPassHere"
    # name of the site (can be changed later)
    site_name: "Bulwark of Boredom"
  }
  # the domain name of your instance (eg "lemmy.ml")
  hostname: "lemmy.bulwarkob.com"
  # address where lemmy should listen for incoming requests
  bind: "0.0.0.0"
  # port where lemmy should listen for incoming requests
  port: 8536
  # Whether the site is available over TLS. Needs to be true for federation to work.
  tls_enabled: true
  # pictrs host
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "API_KEY"
  }
  # settings related to the postgresql database
  database: {
    # name of the postgres database for lemmy
    database: "noDbHere"
    # username to connect to postgres
    user: "noUsrHere"
    # password to connect to postgres
    password: "noPassHere"
    # host where postgres is running
    host: "postgres"
    # port where postgres can be accessed
    port: 5432
    # maximum number of active sql connections
    pool_size: 5
  }
}
I am not certain whether I am somehow grabbing the config from the wrong location in the container. There is no volume or link for a conf file from host to container, so I am just grabbing it from the default location, /etc/nginx/nginx.conf:
spoiler
# run nginx in foreground
daemon off;
pid /run/nginx/nginx.pid;
user npm;

# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;

# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;

error_log /data/logs/fallback_error.log warn;

# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;

events {
    include /data/nginx/custom/events[.]conf;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    server_tokens off;
    tcp_nopush on;
    tcp_nodelay on;
    client_body_temp_path /tmp/nginx/body 1 2;
    keepalive_timeout 90s;
    proxy_connect_timeout 90s;
    proxy_send_timeout 90s;
    proxy_read_timeout 90s;
    ssl_prefer_server_ciphers on;
    gzip on;
    proxy_ignore_client_abort off;
    client_max_body_size 2000m;
    server_names_hash_bucket_size 1024;
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-Scheme $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Accept-Encoding "";
    proxy_cache off;
    proxy_cache_path /var/lib/nginx/cache/public levels=1:2 keys_zone=public-cache:30m max_size=192m;
    proxy_cache_path /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;

    log_format proxy '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"';
    log_format standard '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"';

    access_log /data/logs/fallback_access.log proxy;

    # Dynamically generated resolvers file
    include /etc/nginx/conf.d/include/resolvers.conf;

    # Default upstream scheme
    map $host $forward_scheme {
        default http;
    }

    # Real IP Determination
    # Local subnets:
    set_real_ip_from 10.0.0.0/8;
    set_real_ip_from 172.16.0.0/12; # Includes Docker subnet
    set_real_ip_from 192.168.0.0/16;
    # NPM generated CDN ip ranges:
    include conf.d/include/ip_ranges.conf;
    # always put the following 2 lines after ip subnets:
    real_ip_header X-Real-IP;
    real_ip_recursive on;

    # Custom
    include /data/nginx/custom/http_top[.]conf;

    # Files generated by NPM
    include /etc/nginx/conf.d/*.conf;
    include /data/nginx/default_host/*.conf;
    include /data/nginx/proxy_host/*.conf;
    include /data/nginx/redirection_host/*.conf;
    include /data/nginx/dead_host/*.conf;
    include /data/nginx/temp/*.conf;

    # Custom
    include /data/nginx/custom/http[.]conf;
}

stream {
    # Files generated by NPM
    include /data/nginx/stream/*.conf;

    # Custom
    include /data/nginx/custom/stream[.]conf;
}

# Custom
include /data/nginx/custom/root[.]conf;
It seems there is no nginx config for lemmy here… it might be in other files?
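The proxy hosts created in the NPM GUI are generated as separate files rather than written into the main nginx.conf; going by the include directives in the config above, they should be under /data/nginx/proxy_host/ inside the NPM container. A sketch of how to check (the container name and file name are assumptions):

docker exec nginx-proxy-manager ls /data/nginx/proxy_host/
docker exec nginx-proxy-manager cat /data/nginx/proxy_host/1.conf   # hypothetical file name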
I may be mistaken in how I chose to proceed, but as many are reporting, the docker-compose and general Docker instructions provided by the install guide don't quite seem to work as expected. I have been trying to piece this together, and the included lemmy nginx service container was completely excluded (edited out/deleted) once I had the standalone nginx-proxy-manager set up and working for regular 80/443 -> 1234 proxy requests to the lemmy-ui container.
Does the lemmy nginx have a specific role or tie-in? I am still fairly new to reverse proxying in general.
Yeah, the nginx config for lemmy is not very straightforward. You need to mimic this:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream lemmy {
        server "lemmy:8536";
    }
    upstream lemmy-ui {
        server "lemmy-ui:1234";
    }

    server {
        listen 1236;
        server_name localhost;

        # frontend
        location / {
            set $proxpass "http://lemmy-ui";
            if ($http_accept = "application/activity+json") {
                set $proxpass "http://lemmy";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
                set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
                set $proxpass "http://lemmy";
            }
            proxy_pass $proxpass;

            rewrite ^(.+)/+$ $1 permanent;

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # backend
        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Add IP forwarding headers
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
Also, can you check if all containers are running? Just do
docker-compose ps
in the lemmy dir.
All containers are running. I handle them with Portainer, though I build the stack from the CLI in the lemmy dir, so Portainer can't fully manage them. Reboots, logs, networking, and such work fine though.
As for the nginx config, the nginx proxy manager I use currently has all proxy hosts and settings configured from the web GUI, where I set up the proxy host and SSL information. I made no manual edits to any configurations or settings of the container during or after compose; only GUI actions. When looking at the nginx.conf I replied with here (my current conf), I do not see anything related to the proxy host I created from the GUI. I am not sure if that is normal, or if I maybe have the wrong .conf included here.
With that in mind, would you suggest I simply overwrite and/or add your snippet to my existing conf file?