What are Docker and Podman?

Both are container management tools. The key difference is that Docker is an "all-in-one" solution, while Podman leans on existing features of the system (e.g. systemd). Docker traditionally requires root privileges, since it runs as a daemon (dockerd). Podman is daemonless: instead of asking a daemon to fork a new process, it runs the container as a child of the current process. This also explains why Podman needs something else to start and stop containers, for example after a reboot, and this is where systemd enters the picture.

In this article I describe how I converted my Docker Compose files to Quadlet unit files while keeping the same behavior. This is not a full description of every service I host, but I followed the same process for the others and converted them to rootless containers as well.

Concept of my services

I use a Cloudflare (CF) tunnel in front of my services. The figure below represents my Docker setup.

  • The CF tunnel runs in its own network
  • The components of each service run in a separate network (for example application + database)
  • Some applications within the services can connect to the CF tunnel network
  • Every single service has its own IP address
graph TD
    A(Web) -->|Incoming traffic| B(CF tunnel)
    B --> AA
    B --> BA
    B --> CA
    subgraph ServiceA
        AA(API/Web) --> AB(Database)
        AA --> AC(Cache)
    end
    subgraph ServiceB
        BA(API/Web) --> BB(Database)
        BA --> SBC(Cache)
    end
    subgraph ServiceC
        CA(Gateway) --> CB(API #1)
        CB --> CBD1(Database #1)
        CA --> CC(API #2)
        CC --> CCD1(Database #1)
        CA --> CD(API #3)
        CD --> CDD1(Database #1)
    end

Start the migration

I have chosen the Quadlet solution from Podman. With Quadlet, we can write unit files (*.container, *.volume, *.pod, *.network, etc.), and when we reload systemd, it generates service units that execute Podman commands and bring up the containers, networks, and volumes.
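To give an idea of the mapping, here is a sketch of how the Quadlet files used in this article turn into systemd units (the names follow Quadlet's documented defaults):

```
~/.config/containers/systemd/
├── tunnel.container        -> tunnel.service
├── tunnel.network          -> tunnel-network.service (creates network "systemd-tunnel")
├── keycloak.pod            -> keycloak-pod.service
├── keycloak-db.container   -> keycloak-db.service
└── keycloak-app.container  -> keycloak-app.service
```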

I use Rocky Linux with the following Podman version:

$ podman version
Client:       Podman Engine
Version:      5.2.2
API Version:  5.2.2
Go Version:   go1.22.9 (Red Hat 1.22.9-2.el9_5)
Built:        Tue Feb  4 13:54:01 2025
OS/Arch:      linux/amd64

Prepare CF tunnel

On the user level, systemd reads Quadlet unit files from the ~/.config/containers/systemd directory. Here I have created a file called tunnel.container:

[Unit]
Description=Cloudflare tunnel

[Install]
WantedBy=default.target

[Container]
Image=docker.io/cloudflare/cloudflared:latest
Exec=tunnel --no-autoupdate run
AutoUpdate=registry

# Network settings
Network=tunnel.network

# Environment variables
Secret=tunnel-token,type=env,target=TUNNEL_TOKEN

# Other
UserNS=keep-id

[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

Let’s run through what these mean. The [Unit] part gives the unit a description, which shows up in the output of various systemctl display commands. The [Install] section contains WantedBy=default.target, which is a very important line: because of it, the container starts after a reboot. (For rootless services to start at boot without an interactive login, lingering must also be enabled for the user, e.g. with loginctl enable-linger.) The [Service] part describes when and how the unit may be restarted. These are standard systemd settings.

The more interesting part, at least from this article's point of view, is the [Container] section. Here we define what we want to bring up and how. This is a very basic and simple container; it does not have many customization requirements. We attach tunnel.network to this container. Since this is a separate network, it can be assigned to other containers or pods too. Content of tunnel.network:

[Network]
Label=app=tunnel
Driver=bridge
DNS=1.1.1.1
DNS=1.0.0.1
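For reference, the generated tunnel-network.service ends up running something close to the following command (an illustrative sketch; the systemd-tunnel name is Quadlet's default prefix, visible in the podman network ls output at the end of this article):

```shell
podman network create --driver bridge \
    --dns 1.1.1.1 --dns 1.0.0.1 \
    --label app=tunnel systemd-tunnel
```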

And finally, there is a secret that is mounted as an environment variable. This secret is created, before these unit files are loaded, with the following command:

op read "op://server/secret-tunnel-token/" | podman secret create tunnel-token -

📝 Note

I used the 1Password CLI in the snippet above. If you want to avoid this CLI, then alternatively:

echo -n "my-secret-token" | podman secret create tunnel-token -

It is also worth mentioning that if you type sensitive information into the console, you should prefix the command with at least one space so it does not end up in the shell history.
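Note that the leading-space trick only works when the shell is configured for it; in bash this is controlled by the HISTCONTROL variable, which not every distribution sets to ignore spaces by default. A minimal sketch for ~/.bashrc:

```shell
# ~/.bashrc: make bash drop commands that start with a space from history
export HISTCONTROL=ignorespace
```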

Bring up CF tunnel

Running the systemctl --user daemon-reload command generates the unit files. If a file is incorrect, warning messages are written to the journal (check journalctl --user). Afterwards, the container can be started with systemctl --user start tunnel.service, a unit that was generated from the container file during the reload.

$ systemctl --user status tunnel
○ tunnel.service - Cloudflare tunnel #
     Loaded: loaded (/home/ati/.config/containers/systemd/tunnel.container; generated)
     Active: inactive (dead)
$ systemctl --user start tunnel
$ systemctl --user status tunnel
● tunnel.service - Cloudflare tunnel #
     Loaded: loaded (/home/ati/.config/containers/systemd/tunnel.container; generated)
     Active: active (running) since Mon 2025-03-03 23:22:57 UTC; 3s ago
   Main PID: 172950 (conmon)
      Tasks: 9 (limit: 22552)
     Memory: 25.0M
        CPU: 709ms
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/tunnel.service
             ├─libpod-payload-df2910bf9670b63fe912ae65c083e1f84ae710ba24bf2f6b7ce04cb01e4336c0
             │ └─172953 cloudflared --no-autoupdate tunnel --no-autoupdate run
             └─runtime
               └─172950 /usr/bin/conmon --api-version 1 -c df2910bf9670b63fe912ae65c083e1f84ae710ba24bf2f6b7ce04cb01e4336c0 -u df2910bf9670b63fe912ae65c083e1f84ae710ba24b>

Mar 03 23:22:57 controller-01 systemd-tunnel[172950]: 2025-03-03T23:22:57Z INF Starting metrics server on [::]:20241/metrics

If you have set up a persistent journal, you can also filter for the unit in the user journal with the journalctl --user -u tunnel command.
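Enabling a persistent journal is a one-time setup (a sketch; journald's default Storage=auto already persists if /var/log/journal exists, so creating that directory is an alternative):

```
# /etc/systemd/journald.conf (or a drop-in under journald.conf.d/)
[Journal]
Storage=persistent
```

After changing it, restart journald with sudo systemctl restart systemd-journald.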

If we take a look at the generated unit file, we can see some interesting things:

# /run/user/1000/systemd/generator/tunnel.service
# Automatically generated by /usr/lib/systemd/user-generators/podman-user-generator

[Unit]
Wants=network-online.target
After=network-online.target             <==== Automatically set network as dependency
Description=Cloudflare tunnel
SourcePath=/home/ati/.config/containers/systemd/tunnel.container
RequiresMountsFor=%t/containers
Requires=tunnel-network.service         <==== Automatically set any further unit file as dependency
After=tunnel-network.service

[Install]
WantedBy=default.target

[X-Container]
Pod=tunnel.pod
Image=docker.io/cloudflare/cloudflared:latest
Exec=tunnel --no-autoupdate run
AutoUpdate=registry

# Network settings
Network=tunnel.network

# Environment variables
Secret=tunnel-token,type=env,target=TUNNEL_TOKEN

# Other
UserNS=keep-id

[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

Environment=PODMAN_SYSTEMD_UNIT=%n
KillMode=mixed
ExecStop=/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
ExecStopPost=-/usr/bin/podman rm -v -f -i --cidfile=%t/%N.cid
Delegate=yes
Type=notify
NotifyAccess=all
SyslogIdentifier=%N
ExecStart=/usr/bin/podman run --name=systemd-%N --cidfile=%t/%N.cid --replace --rm --cgroups=split --network=systemd-tunnel --sdnotify=conmon -d --userns keep-id --label

Migrate a Docker Compose file to Quadlet

In this scenario, I show how I moved my Keycloak instance from Docker to Quadlet. Basically, there are two approaches to migrating one thing to another:

  1. Recreate the same architecture that you had in the past
  2. Understand what you did in the past and why, and instead of fully re-creating everything, use the new features and move just the functionality

I have always preferred the second option, and this time is no exception. It is the more difficult route, in my opinion, but in the long term everything feels more natural. Here is my Docker Compose file for Keycloak.

version: "3.9"

services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    command: start
    env_file:
      - .env
    networks:
      - default
      - outside
  db:
    image: postgres:16
    env_file:
      - .env
    volumes:
      - ../db:/var/lib/postgresql/data
    networks:
      - default
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      interval: 10s
      timeout: 15s
      retries: 10
      start_period: 5s

networks:
  default:
  outside:
    external: true
    name: outside

What will be different compared with Docker Compose:

  • Network: I do not want to create a network for each service (e.g. a keycloak.network file) just to allow communication between containers. Instead, I put both containers into one pod. A pod shares the same network namespace and storage, so the Keycloak container can reach the database on the 127.0.0.1 address.
  • Environment variables: I will include them in the unit file; for sensitive information I use a secret, as with the tunnel token above.
  • Database command: Because of the first point, the tunnel network is attached to the whole pod, which means the database would also be reachable from the CF tunnel, and I do not want that. So the PostgreSQL database must listen on the 127.0.0.1 address instead of 0.0.0.0.

Prepare unit files

First, a pod unit file must be created. I created it with the name keycloak.pod:

[Pod]
PodName=keycloak
Network=tunnel.network     <== The network is attached to the pod, not to the individual containers
NetworkAlias=keycloak-nw   <== By default only the infra container's name resolves; this alias is easier to remember when I set up the CF tunnel config

Second, the database unit file is created. Its name is keycloak-db.container:

[Unit]
Description=PostgreSQL database for Keycloak

[Container]
Pod=keycloak.pod  <=== Points to the pod unit this container belongs to
Image=docker.io/postgres:16
Exec=postgres -c listen_addresses=127.0.0.1    <=== Here it is set to listen on 127.0.0.1

# Persistent volumes
Volume=%h/container_data/keycloak/db:/var/lib/postgresql/data

# Environment variables
Environment=POSTGRES_USER=db_user
Environment=POSTGRES_DB=keycloak
Secret=keycloak-db-password,type=env,target=POSTGRES_PASSWORD  <=== Use podman secret for sensitive data

# Health monitoring
HealthCmd=pg_isready -d $POSTGRES_DB -U $POSTGRES_USER
HealthInterval=30s
HealthTimeout=15s
HealthRetries=10
HealthStartPeriod=15s
UserNS=keep-id

[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5


[Install]
WantedBy=default.target   <=== Start after reboot
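The Health* keys above map to podman run's health-check flags; the generated service runs something along these lines (an illustrative fragment, not the complete generated command):

```shell
podman run ... \
    --health-cmd 'pg_isready -d $POSTGRES_DB -U $POSTGRES_USER' \
    --health-interval 30s \
    --health-timeout 15s \
    --health-retries 10 \
    --health-start-period 15s \
    docker.io/postgres:16
```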

And finally, here is the keycloak-app.container file:

[Unit]
Description=Keycloak application
Wants=keycloak-db.service   <=== Pull in the database when this unit starts

[Container]
Pod=keycloak.pod  <=== Points to the pod unit this container belongs to
Image=quay.io/keycloak/keycloak:latest
Exec=start

# Environment variables
Environment=KC_DB=postgres
Environment=KC_DB_USERNAME=db_user
Environment=KC_DB_URL_DATABASE=keycloak
Environment=KC_DB_URL_HOST=127.0.0.1  <=== They are in one pod and share the network namespace, so no additional network is needed between app and database
Environment=KC_DB_URL_PORT=5432
Environment=KC_HOSTNAME_STRICT=false
Environment=KC_HOSTNAME=https://auth.example.com
Environment=KC_HTTP_ENABLED=true
Environment=KC_HEALTH_ENABLED=true
Environment=KEYCLOAK_ADMIN=InitialAdmin
Secret=keycloak-db-password,type=env,target=KC_DB_PASSWORD <=== Use podman secret for sensitive data

# Other
UserNS=keep-id

[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5


[Install]
WantedBy=default.target <=== Start after reboot

After a daemon reload, the unit files are generated and ready to start.

systemctl --user daemon-reload
systemctl --user start keycloak-pod

With these settings, the CF tunnel ingress config looks like this:

ingress:
  - hostname: auth.example.com
    service: http://keycloak-nw:8080
  - service: http_status:404

Final landscape

$ podman ps --pod --filter pod=keycloak
CONTAINER ID  IMAGE                                    COMMAND               CREATED       STATUS                 PORTS                         NAMES                 POD ID        PODNAME
86270d5a97da  localhost/podman-pause:5.2.2-1738677241                        24 hours ago  Up 24 hours                                          601798eb3f22-infra    601798eb3f22  keycloak
eb81b15afdd6  quay.io/keycloak/keycloak:latest         start                 24 hours ago  Up 24 hours            8080/tcp, 8443/tcp, 9000/tcp  systemd-keycloak-app  601798eb3f22  keycloak
12a1cb57c031  docker.io/library/postgres:16            postgres -c liste...  24 hours ago  Up 24 hours (healthy)  5432/tcp                      systemd-keycloak-db   601798eb3f22  keycloak
$ podman ps --filter name=tunnel
CONTAINER ID  IMAGE                                    COMMAND               CREATED       STATUS       PORTS       NAMES
ad66194f011e  docker.io/cloudflare/cloudflared:latest  tunnel --no-autou...  24 hours ago  Up 24 hours              systemd-tunnel
$ podman network ls
NETWORK ID    NAME              DRIVER
2f259bab93aa  podman            bridge
bf315acc902a  systemd-tunnel    bridge


Recommended reading

If you are more interested in this topic, here are some links that I recommend reading:

Last words

This is how I started my journey of moving from Docker to Podman using Quadlet. I know that podman-compose is a thing, but it never felt like a reliably working solution to me. I like the idea behind Quadlet, and I have already migrated all the services I hosted on Docker to rootless Quadlet, and all of them are working (it was not that much: Keycloak, Gitea, and some self-made applications). It was not easy, and I had to learn how things work, but in the end everything runs and I am satisfied with the result.