I have a few questions regarding my setup for a production Keycloak environment running in Docker. I have it “working”, but have several questions before I accept it as production ready…
Normally I am able to work through all issues myself, but I am getting a bit antsy about the time this is taking; I have already spent a day on it and it is putting off a lot of other projects… (which will rely on this…)
Below are the current setup and configs for the auth server at keycloak.tld.com – PRODUCTION
Nginx config auth.conf, located at /etc/nginx/sites-available/auth.conf
Initial (pre-Let's Encrypt run)
server {
    server_name keycloak.tld.com;
    access_log /var/log/nginx/auth.access.log;
    error_log /var/log/nginx/auth.error.log;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass https://localhost:8443;
    }
}
After Let's Encrypt
server {
    server_name keycloak.tld.com;
    access_log /var/log/nginx/auth.access.log;
    error_log /var/log/nginx/auth.error.log;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass https://localhost:8443;
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/keycloak.tld.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/keycloak.tld.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = keycloak.tld.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name keycloak.tld.com;
    listen 80;
    return 404; # managed by Certbot
}
Symbolic link creation:
ln -s /etc/nginx/sites-available/auth.conf /etc/nginx/sites-enabled/
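After creating the symlink, the config gets the usual syntax check and reload (standard nginx commands, nothing specific to this setup):
sudo nginx -t
sudo systemctl reload nginx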
I run Let's Encrypt and get the certs before building the images… (all good)
Docker setup
Dockerfile (based on the Keycloak guide "Running Keycloak in a container"), plus additions mainly related to passing the SSL certs through…
# Pin 25.0.1 (currently the latest) rather than :latest, to avoid surprise breaking changes and force base image updates to be explicit
FROM quay.io/keycloak/keycloak:25.0.1 AS builder
# Enable health and metrics support
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true
# Database vendor Postgres
ENV KC_DB=postgres
# Copy the Let's Encrypt SSL certs into the container with the correct permissions
WORKDIR /opt/keycloak
COPY --chown=1000:0 certs/fullchain.pem /opt/keycloak/fullchain.pem
COPY --chown=1000:0 certs/privkey.pem /opt/keycloak/privkey.pem
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:25.0.1
COPY --from=builder /opt/keycloak/ /opt/keycloak/
# Pointing to external postgres (container) shared database on pgres_default network
ENV KC_FEATURES=hostname:v2
ENV KC_DB_URL=jdbc:postgresql://postgresdb/keycloak
ENV KC_DB_USERNAME=cloak
ENV KC_DB_PASSWORD=some_super_strong_password
ENV KC_HOSTNAME=keycloak.tld.com
ENV KC_HTTPS_CERTIFICATE_FILE=/opt/keycloak/fullchain.pem
ENV KC_HTTPS_CERTIFICATE_KEY_FILE=/opt/keycloak/privkey.pem
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
# Not sure about the line below (an attempt to simplify Let's Encrypt cert auto-renewal by potentially copying the PEMs to here... or just run a script to copy them into the container directly every few weeks...)
VOLUME keycloak-data:/opt/keycloak/
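A side note on that last line: my understanding is that VOLUME in a Dockerfile only takes container paths, and the name:path mapping happens at docker run / Compose time, so I suspect the split should look roughly like this (unverified, and keycloak-data is just my volume name):
# In the Dockerfile, only the container path is declared:
VOLUME /opt/keycloak/data
# The named volume is then mapped when running the container, e.g.:
# docker run -v keycloak-data:/opt/keycloak/data ...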
Run the command below (from the folder the Dockerfile is located in) to create the container image
docker build . -t mysite-keycloak
Run the image with options
docker run --name mysite-keycloak --net pgres_default -p 8080:8080 -p 8443:8443 -p 9000:9000 -e KEYCLOAK_ADMIN=keyadmin -e KEYCLOAK_ADMIN_PASSWORD=super_strong_password -v keycloak-data mysite-keycloak start --optimized --verbose
I want to run this from a docker-compose.yml instead… (which then uses auth.env for the ENV variables…)
networks:
  pgres_default:
    external: true
services:
  keycloak:
    image: quay.io/keycloak/keycloak:25.0.1 # Needs to be the image that is created from the Dockerfile
    env_file: "auth.env"
    restart: always
    ports:
      - 8080:8080
      - 8443:8443
      - 9000:9000
    volumes:
      - keycloak-data:/opt/keycloak
The auth.env file
KEYCLOAK_ADMIN_USER=keyadmin
KEYCLOAK_ADMIN_PASSWORD=super_strong_password
KEYCLOAK_FRONTEND_URL=https://keycloak.tld.com
PROXY_ADDRESS_FORWARDING=true
THE ISSUES
NOTE: I am not getting errors and all pages of Keycloak are working… The questions are about ensuring a “best practice”, secure production environment.
I have another Keycloak server running for a separate application that is quite old (v13…), and I am not willing to spend the huge amount of time troubleshooting both it and the original app (still in use) to update them for what this new one will be doing. It was set up by a former employee using dokku, which I am not that familiar with (other than fixing basic breakages on OS updates), and I am not willing to add yet another environment to learn just to attempt the same approach (it is set up with GitHub workflows that update the connected apps via new releases on GitHub, so I am guessing he used dokku to assist with that…).
TO THE ISSUES
I run nginx on the main server side with a .conf file for each running container, mapping the required port with proxy_pass https://localhost:8443;, which, again, is working fine, at least in the other instances I have running (pgAdmin 4 and an e-commerce site).
So, by running docker build . -t mysite-keycloak, the image is created fine.
I am then able to start the container with docker run --name mysite-keycloak --net pgres_default -p 8080:8080 -p 8443:8443 -p 9000:9000 -e KEYCLOAK_ADMIN=keyadmin -e KEYCLOAK_ADMIN_PASSWORD=super_strong_password -v keycloak-data mysite-keycloak start --optimized --verbose, which starts fine, and keycloak.tld.com is reachable.
However, here is where my questions start…
First, when going to keycloak.tld.com, the browser appends the port (:8443) to the domain… All pages of the admin console work fine, but with the port appended… As my proxy networking knowledge is still a little hazy, I believe this may be due to the nginx .conf file and how I have proxy_pass https://localhost:8443; configured, and not the config of the Dockerfile and the port mapping of the docker run command?
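For reference, my (unverified) understanding is that with hostname:v2 Keycloak also needs to be told about the reverse proxy and its public URL, so I am wondering whether env options roughly like these belong in my setup (my guess, not something I have confirmed fixes it):
# Hypothetical additions I am considering, not yet tested:
# tell Keycloak which proxy header scheme nginx is sending
KC_PROXY_HEADERS=xforwarded
# full public URL, so Keycloak stops advertising the internal :8443 port
KC_HOSTNAME=https://keycloak.tld.com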
Second, obviously I do not want to use docker run and would instead prefer to use the docker-compose.yml file. I have not yet found enough information on mapping image: quay.io/keycloak/keycloak:25.0.1 to the image that is created by docker build . -t mysite-keycloak, i.e. how do I reference that image? (Apart from that, I notice that using docker run... means the console needs to keep running for the container to keep running… I assume there is a way around this, but as I do not want to run it in this manner I have not looked into it any further…) A sketch of what I am guessing the compose service should look like follows.
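This is roughly what I am guessing the service definition should become, either referencing the locally built tag or letting Compose build from the Dockerfile itself (I have not confirmed either is the right approach):
services:
  keycloak:
    # either reference the tag produced by `docker build . -t mysite-keycloak`...
    image: mysite-keycloak
    # ...or let Compose build it from the Dockerfile in this folder instead:
    # build: .
    env_file: "auth.env"
(And for the docker run case, I believe -d would detach it, but Compose makes that moot.)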
Third, the SSL certs: is there a better way than copying the renewed certs into the container (I am hoping to just mount the volume that Let's Encrypt stores the files in, so that the container automatically sees any updates…)? Something like the bind mounts sketched below is what I have in mind.
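Roughly what I mean (untested, and I am unsure how the symlinks in /etc/letsencrypt/live/ behave when bind-mounted as single files):
    volumes:
      - /etc/letsencrypt/live/keycloak.tld.com/fullchain.pem:/opt/keycloak/fullchain.pem:ro
      - /etc/letsencrypt/live/keycloak.tld.com/privkey.pem:/opt/keycloak/privkey.pem:ro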
Fourth, should I map the ports differently (e.g. 48443:8443 etc…) for more security, and if so, how would this affect the nginx .conf file setup? Any other stuff-ups? This is for a production environment so it needs to be production ready… (An example of the kind of mapping I mean is sketched below.)
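For example, this is the sort of mapping I am wondering about, binding only to the loopback interface so that only nginx on the host can reach the container (just an idea, not something I have validated):
    ports:
      - "127.0.0.1:48443:8443"
      # nginx would then need: proxy_pass https://localhost:48443;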
Others: Is PROXY_ADDRESS_FORWARDING=true necessary in the .env file? Am I missing any headers etc. in my nginx conf? (A fuller header block I have seen in other examples is sketched below.)
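For context, this is the fuller set of proxy headers I have seen in other nginx-in-front-of-Keycloak examples and am considering; I am not certain all of them are required (the Host and X-Forwarded-Port lines in particular are my assumption):
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port 443;
        proxy_pass https://localhost:8443;
    }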
Thanks in advance for all helpful responses!