The problem I want to solve

I use my computer for development, and that sometimes requires running extra services like databases, Kafka, Redis, etc. Containers have made it easy to deploy anything, but my laziness goes beyond that: I don’t even want the hassle of starting the containers.

What I’ve achieved with the steps below is that systemd automatically starts a service the moment I start using its port. And if I don’t use it for a while, it automatically stops it as well.

There are other products that solve this problem, but I was amazed when I figured out that I can also do it with systemd, without any extra software.

My environment

I’ve done the steps below in the following environment.

$ uname -a
Linux atipc 6.15.6-200.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jul 10 15:22:32 UTC 2025 x86_64 GNU/Linux
$ cat /etc/os-release
NAME="Fedora Linux"
VERSION="42 (KDE Plasma Desktop Edition)"
RELEASE_TYPE=stable
ID=fedora
VERSION_ID=42
VERSION_CODENAME=""
PLATFORM_ID="platform:f42"
PRETTY_NAME="Fedora Linux 42 (KDE Plasma Desktop Edition)"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:42"
DEFAULT_HOSTNAME="fedora"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f42/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=42
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=42
SUPPORT_END=2026-05-13
VARIANT="KDE Plasma Desktop Edition"
VARIANT_ID=kde
$ systemctl --version
systemd 257 (257.7-1.fc42)
+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +BTF +XKBCOMMON +UTMP +SYSVINIT +LIBARCHIVE
$ podman version
Client:        Podman Engine
Version:       5.5.2
API Version:   5.5.2
Go Version:    go1.24.4
Git Commit:    e7d8226745ba07a64b7176a7f128e4ef53225a0e
Built:         Tue Jun 24 02:00:00 2025
Build Origin:  Fedora Project
OS/Arch:       linux/amd64

💡 Tip

Systemd commands can be long, so I’m using aliases to make typing faster.

alias usta='systemctl --user start'
alias usto='systemctl --user stop'
alias ures='systemctl --user restart'
alias ustat='systemctl --user status'
alias ureload='systemctl --user daemon-reload'
alias ucat='systemctl --user cat'
alias utimer='systemctl --user list-timers'
alias usvc='systemctl --user --type=service'
alias uena='systemctl --user enable --now'
alias udis='systemctl --user disable --now'

alias ssta='systemctl start'
alias ssto='systemctl stop'
alias sres='systemctl restart'
alias sstat='systemctl status'
alias sreload='systemctl daemon-reload'
alias scat='systemctl cat'
alias stimer='systemctl list-timers'
alias ssvc='systemctl --type=service'
alias sena='systemctl enable --now'
alias sdis='systemctl disable --now'

The idea

My idea is to have a loopback IP address (10.0.0.1) that is available only locally. This is the address I’ll use to reach my services on their ports.

I’ve already written about how systemd can pass the file descriptors behind sockets in the Unix Domain Socket post. TL;DR: systemd can create the socket itself and pass its file descriptor to the application. Sounds good, but not every application supports this method. I want a generic solution, so this alone is not enough.
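To illustrate why application support matters: a program has to implement the sd_listen_fds convention to accept sockets from systemd. A minimal sketch of that check (the function name and messages are my own, for illustration):

```shell
# sd_listen_fds convention: systemd sets LISTEN_PID and LISTEN_FDS, and the
# passed file descriptors start at fd 3 (SD_LISTEN_FDS_START).
activation_mode() {
  if [ "${LISTEN_PID:-}" = "$$" ] && [ "${LISTEN_FDS:-0}" -ge 1 ]; then
    echo "socket-activated"   # use fd 3 instead of opening a socket
  else
    echo "self-listening"     # no fds passed: open our own socket
  fi
}
activation_mode   # prints "self-listening" when not started by systemd
```

A program that never checks these variables simply cannot take over a systemd-created socket, which is exactly the gap the proxy below fills.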

In this post, I’ll walk through an example implementation: I’ll run a PostgreSQL database and reach it as usual.

[ client ]
   ↓  10.0.0.1:5432
[ systemd socket ]
[ systemd socket proxy ]
   ↓  127.0.0.1:5432
[ container ]

I’ll also implement a feature that stops the database if there has been no incoming connection in the last 30 seconds.

Implementation

In the following sections, I’ll go through everything I’ve done during the implementation, and how and why.

Add developer IP

I call the IP address where my sockets are bound the “developer IP”. In my case it is 10.0.0.1, and it lives on a loopback device, which means it is only available locally, like 127.0.0.1. We can add the new address with the following command:

$ sudo ip addr add 10.0.0.1/32 dev lo
$ ip -br a
lo               UNKNOWN        127.0.0.1/8 10.0.0.1/32 ::1/128
enp3s0           UP             192.168.1.68/24 2001:4c4e:2258:8000:9d5e:637b:56dd:823b/64 fe80::2d83:5113:c201:f899/64

But this is only temporary: when the system is rebooted, the address disappears. I’ve solved this by putting the command into a systemd unit file. I’ve created a file at /usr/lib/systemd/system/dev-ip.service with the following content.

Besides the command above, it first checks whether the address is already assigned, so the unit is idempotent.

[Unit]
Description=Assign 10.0.0.1 to loopback interface (if not already)
DefaultDependencies=no
Before=network.target
Wants=network.target

[Service]
Type=oneshot
ExecStart=/bin/bash -c "ip addr show dev lo | grep -q '10.0.0.1/32' || ip addr add 10.0.0.1/32 dev lo"
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
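As a side note, an alternative I’m suggesting (not part of the setup above): `ip addr replace` is idempotent on its own, so the grep guard could be dropped, assuming iproute2 lives at /usr/sbin/ip on your system:

```ini
[Service]
Type=oneshot
# "ip addr replace" succeeds whether or not the address already exists
ExecStart=/usr/sbin/ip addr replace 10.0.0.1/32 dev lo
RemainAfterExit=yes
```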

The only actions left are to enable the service and activate it.

$ sena dev-ip.service
$ sstat dev-ip.service
● dev-ip.service - Assign 10.0.0.1 to loopback interface (if not already)
     Loaded: loaded (/usr/lib/systemd/system/dev-ip.service; enabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf
     Active: active (exited) since Sun 2025-07-20 13:18:17 CEST; 41min ago
 Invocation: ecb9da2890794aad90be08aaa5d7f2b0
    Process: 702 ExecStart=/bin/bash -c ip addr show dev lo | grep -q '10.0.0.1/32' || ip addr add 10.0.0.1/32 dev lo (code=exited, status=0/SUCCESS)
   Main PID: 702 (code=exited, status=0/SUCCESS)
   Mem peak: 2.6M
        CPU: 9ms

Great, we have an address we can bind sockets to.

But why do we need this extra interface

Great question: we could bind the systemd socket to 0.0.0.0, to the machine’s own IP address, or to 127.0.0.1. But I have reasons for doing it this way:

  • I want to use the same port for the systemd socket and for the port exposed by the container. So I can’t use 0.0.0.0, because binding would fail: 0.0.0.0 means all interfaces (more or less).
  • Because I will expose the database port on 127.0.0.1:5432, I can’t use that address again.
  • Using my machine’s IP is not a good solution either, because it comes from DHCP, so it is not fixed.

So the option left is to create a new address and basically proxy between 10.0.0.1:5432 and 127.0.0.1:5432. I use the same port for simplicity, and it also makes scripting easier later.

But before we talk about the proxy, let’s see what the systemd socket looks like.

Create systemd socket

I create a user systemd unit at ~/.config/systemd/user/db-proxy.socket with the following content.

[Unit]
Description=Start PostgreSQL container on demand

[Socket]
ListenStream=10.0.0.1:5432

[Install]
WantedBy=sockets.target

I enable the unit, and I can see that it is already listening. Enabling also makes it persistent across system restarts.

$ uena db-proxy.socket
$ ustat db-proxy.socket
● db-proxy.socket - Start PostgreSQL container on demand
     Loaded: loaded (/home/ati/.config/systemd/user/db-proxy.socket; enabled; preset: disabled)
     Active: active (listening) since Sun 2025-07-20 13:18:35 CEST; 1h 7min ago
 Invocation: 1be3926e0b6a43e9a927fbf54f684bd0
   Triggers: ● db-proxy.service
     Listen: 10.0.0.1:5432 (Stream)
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/db-proxy.socket

júl 20 13:18:35 atipc systemd[1653]: Listening on db-proxy.socket - Start PostgreSQL container on demand.
$ sudo netstat -plnt | grep 5432
tcp        0      0 10.0.0.1:5432           0.0.0.0:*               LISTEN      1653/systemd

Create proxy service

If any traffic arrives at db-proxy.socket, systemd automatically starts db-proxy.service and passes it the file descriptor. Unfortunately, not every program supports receiving and handling this file descriptor.

This is where systemd-socket-proxyd comes in. It is able to use that file descriptor and proxy the traffic to another port, or even to a Unix Domain Socket. It also has a parameter, --exit-idle-time, which stops the service if there has been no connection for a given time interval.

📣 Important

The --exit-idle-time option is available since systemd version 246. You can check your systemd version with the systemctl --version command.
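A quick way to script that check (a sketch; it parses the version line shown earlier, and the 246 threshold comes from the note above):

```shell
# Take the first line of `systemctl --version` and extract the major version.
# The literal string below stands in for the real command output.
version_line='systemd 257 (257.7-1.fc42)'    # i.e. $(systemctl --version | head -n1)
major="$(printf '%s\n' "$version_line" | awk '{print $2}')"
if [ "$major" -ge 246 ]; then
  echo "--exit-idle-time is supported"
fi
```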

In the [Unit] section of the proxy unit (~/.config/systemd/user/db-proxy.service), I’ve set up some dependencies. Because of Requires=db.service, the database service (generated from the db.container Quadlet) is also started. If the socket were not running, it would be started as well.

I’ve also put a one-second sleep before the proxy start command, because sometimes my first connection failed: the proxy forwarded traffic too early, before the database had finished initializing. This introduces a small delay on the first connection; the rest of the connections go smoothly.

[Unit]
Requires=db.service
After=db.service
Requires=db-proxy.socket
After=db-proxy.socket

[Service]
ExecStartPre=/bin/sleep 1
ExecStart=/usr/lib/systemd/systemd-socket-proxyd --exit-idle-time=30s 127.0.0.1:5432
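If the fixed one-second sleep ever proves fragile, an alternative sketch (my own idea, not part of the setup above) is to poll the database port until it accepts connections, using bash’s /dev/tcp feature:

```shell
# wait_for_port HOST PORT [TRIES]: return 0 once a TCP connection succeeds,
# probing every 0.1 s; return 1 after TRIES failed attempts.
wait_for_port() {
  local host="$1" port="$2" tries="${3:-50}" i
  for i in $(seq 1 "$tries"); do
    # opening /dev/tcp/HOST/PORT attempts a TCP connection (bash-only feature)
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}
```

Wired into ExecStartPre via /bin/bash -c, this would wait only as long as actually needed, though I haven’t tested it in the unit.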

Create database service

In the ~/.config/containers/systemd/db.container file, I’ve created a service for a PostgreSQL container.

It is not a big deal, just a simple file. The only extra is BindsTo=db-proxy.service. As a result, if the proxy service stops (because of an explicit stop or the idle timeout), the database is stopped as well.

[Unit]
Description=PostgreSQL database
BindsTo=db-proxy.service

[Container]
Image=docker.io/library/postgres:16
AutoUpdate=registry

# Network
PublishPort=127.0.0.1:5432:5432

# Volume
Volume=db.volume:/var/lib/postgresql/data

# Environment variables
Environment=POSTGRES_USER=ati
Environment=POSTGRES_DB=atidb
Secret=db-pw,type=env,target=POSTGRES_PASSWORD

# Health monitoring
HealthCmd=pg_isready -d $POSTGRES_DB -U $POSTGRES_USER
HealthInterval=30s
HealthTimeout=15s
HealthRetries=10
HealthStartPeriod=15s

# Other
UserNS=keep-id:uid=999,gid=999

[Service]
Restart=on-failure
RestartSec=5
StartLimitBurst=5

See in practice

Now that the preparation is done, let’s see how it works in practice.

# Only just systemd listen on 10.0.0.1
$ sudo netstat -plnt | grep 5432
tcp        0      0 10.0.0.1:5432           0.0.0.0:*               LISTEN      1653/systemd
#
# Now make a connection to systemd socket
# After a short time it is connected
$ psql -U ati -d atidb -W -h 10.0.0.1
Password:
psql (16.9)
Type "help" for help.

atidb=# \l
                                                   List of databases
   Name    | Owner | Encoding | Locale Provider |  Collate   |   Ctype    | ICU Locale | ICU Rules | Access privileges
-----------+-------+----------+-----------------+------------+------------+------------+-----------+-------------------
 atidb     | ati   | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           |
 postgres  | ati   | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           |
 template0 | ati   | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =c/ati           +
           |       |          |                 |            |            |            |           | ati=CTc/ati
 template1 | ati   | UTF8     | libc            | en_US.utf8 | en_US.utf8 |            |           | =c/ati           +
           |       |          |                 |            |            |            |           | ati=CTc/ati
(4 rows)

atidb=# \q
#
# Now, we can see that 127.0.0.1 is also available with 5432 port
$ sudo netstat -plnt | grep 5432
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      17051/pasta
tcp        0      0 10.0.0.1:5432           0.0.0.0:*               LISTEN      1653/systemd
#
# Because the database container is running
$ podman ps
CONTAINER ID  IMAGE                          COMMAND     CREATED         STATUS                   PORTS                     NAMES
e7ee2d000fe5  docker.io/library/postgres:16  postgres    20 seconds ago  Up 20 seconds (healthy)  127.0.0.1:5432->5432/tcp  systemd-db
#
# Wait 30 seconds, and we can see that database has been stopped
$ sleep 30 && podman ps
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
#
# 5432 port is only available via 10.0.0.1 interface again
$ sudo netstat -plnt | grep 5432
tcp        0      0 10.0.0.1:5432           0.0.0.0:*               LISTEN      1653/systemd

Caveats

While working out this solution, I ran into some caveats that I’d like to share.

Be aware of port usage

If you run multiple databases, be aware of port collisions. I’ve used 127.0.0.1:5432 for this PostgreSQL database. If I want another one, it must be bound to a different port, or I have to make sure the two never run at the same time.
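For example, a hypothetical second PostgreSQL instance (call it db2) could keep the same developer-IP idea but move everything to port 5433; these are the only lines that would differ from the db units above:

```ini
# db2.container: publish a different host port (container port stays 5432)
PublishPort=127.0.0.1:5433:5432

# db2-proxy.socket
ListenStream=10.0.0.1:5433

# db2-proxy.service
ExecStart=/usr/lib/systemd/systemd-socket-proxyd --exit-idle-time=30s 127.0.0.1:5433
```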

Use BindsTo instead of StopWhenUnneeded

StopWhenUnneeded= also sounds good, and it was my first approach to implementing the automatic database shutdown. The systemd documentation describes it like this:

Takes a boolean argument. If true, this unit will be stopped when it is no longer used. Note that, in order to minimize the work to be executed, systemd will not stop units by default unless they are conflicting with other units, or the user explicitly requested their shut down. If this option is set, a unit will be automatically cleaned up if no other active unit requires it. Defaults to false.

Reasons I’ve used BindsTo= instead:

  • The container uses a volume, db.volume. The generated db-volume.service has a Requires= dependency on db.service.
  • Since db-proxy.service is not the only unit with a dependency on the database, the database does not stop when the proxy service does.

Improvements for future

Automation

It works now, but I have to define the socket and proxy units manually. I used this double-address trick with identical ports precisely to make it easier to generate those files from the Quadlet units (container and pod).

Because it is easy to do via a template, with no port juggling, this can be implemented quickly.
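As a sketch of that future automation (the gen_units helper and its defaults are my assumptions, mirroring the hand-written db-proxy units above):

```shell
#!/bin/bash
# Generate the matching .socket and .service proxy units for a Quadlet
# container from a name and a port, using the same port on both sides.
gen_units() {
  local name="$1" port="$2" dev_ip="${3:-10.0.0.1}" idle="${4:-30s}"

  cat > "${name}-proxy.socket" <<EOF
[Unit]
Description=Start ${name} container on demand

[Socket]
ListenStream=${dev_ip}:${port}

[Install]
WantedBy=sockets.target
EOF

  cat > "${name}-proxy.service" <<EOF
[Unit]
Requires=${name}.service
After=${name}.service
Requires=${name}-proxy.socket
After=${name}-proxy.socket

[Service]
ExecStartPre=/bin/sleep 1
ExecStart=/usr/lib/systemd/systemd-socket-proxyd --exit-idle-time=${idle} 127.0.0.1:${port}
EOF
}

gen_units db 5432   # writes db-proxy.socket and db-proxy.service
```

The generated files would then go to ~/.config/systemd/user/, followed by a daemon-reload and enabling the socket.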

Is it a good idea to use in production

I’m using this method on my developer machine. It is not a production server, and I haven’t tested putting a complex service behind this socket+proxy combination. In theory it could work, but there may be unseen issues to solve. So I can’t answer this question yet.

Final words

I’ve built a solution to one of my problems, so I can be even lazier in the future than I am now. I didn’t know that I could integrate systemd socket activation with Podman Quadlet, and it amazed me. Systemd is a very nice and handy tool!