Service container persistent after device reboot?

We have an application that runs across several containers. One container is launched with the following configuration:

  container_name: service-broadcaster
  image: 'IMAGE'
  network_mode: host
  privileged: true
  environment:
    - DBUS_SYSTEM_BUS_ADDRESS=unix:path=/host/run/dbus/system_bus_socket
  labels:
    io.balena.features.dbus: '1'
    io.balena.features.supervisor-api: '1'

The service logs to /service_broadcaster_${date}.log

When I reboot the device several times, the logs of previous runs are still present:

-rw-r--r--    1 root     root         997 Dec 20 10:20 service_broadcaster_2019-12-20-10-20-30.log
-rw-r--r--    1 root     root        2.4K Dec 20 10:25 service_broadcaster_2019-12-20-10-21-38.log
-rw-r--r--    1 root     root        2.4K Dec 20 10:27 service_broadcaster_2019-12-20-10-25-47.log
-rw-r--r--    1 root     root        1.4K Dec 20 10:28 service_broadcaster_2019-12-20-10-27-58.log

Why are the logs of previous runs present? Isn’t the container supposed to be non-persistent?

Hi there. From what I understand, when a shutdown is called the container is stopped in its current state, and when the device restarts it picks up where it left off. The files would only be cleared out if the container were recreated, which happens when an update is pushed or an environment variable is changed (not 100% sure about the env var case).

To better understand this, one resource to look at is the Balena Masterclass on services:

Thanks for your reply. This clarifies things. Is it possible to restart the device container after a device reboot?

I believe you mean “recreate” rather than “restart”. As Shaun mentioned above, the container is only recreated when you push a new update. I'm not sure which device this is, but if it uses SD cards for storage, then as general advice you should avoid writing to disk at all, since it wears them out. In this specific case, the best approach would be to configure the process to write to RAM via tmpfs, or to skip writing entirely and just log to stdout if possible.
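For instance, assuming the service can be pointed at a log directory (the path below is illustrative, not from this thread), a compose fragment along these lines keeps the logs in RAM:

```yaml
  container_name: service-broadcaster
  image: 'IMAGE'
  network_mode: host
  privileged: true
  tmpfs:
    # Logs written under this path live in RAM: nothing hits the SD card,
    # and the files disappear on reboot rather than accumulating.
    - /tmp/logs
```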

Thanks for your response. We have a container that sets some routes on startup using the route add command. However, when we reboot the device, these routes are no longer present. How can we make sure these routes are always present?

The iptables rules that we set appear to persist, though. Is this different for route?


Normally, neither iptables rules nor modifications to the routing table made with route persist across reboots. This data is stored in kernel memory and, unless you explicitly persist it and re-apply it on boot, it will not survive a reboot.

In the iptables case, a typical way of persisting the config is the iptables-persistent package.
I’m not sure about the routing table, but configuring routes with nmcli might also persist the configuration.
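As a hedged sketch of the nmcli approach (the connection name and addresses below are placeholders, and this assumes NetworkManager manages the interface):

```shell
# Illustrative: ask NetworkManager to own the route so it is re-applied
# on every boot. "eth0" and the addresses are placeholders.
nmcli connection modify eth0 +ipv4.routes "10.0.10.0/24 172.17.0.1"
nmcli connection up eth0   # re-activate so the route takes effect now
```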

The practice we follow (e.g. in balena-supervisor) is to keep the rules within the application and apply them on application start.
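As a minimal sketch of that pattern (the addresses, app path, and messages are made up for illustration, not taken from balena-supervisor):

```shell
#!/bin/sh
# Hypothetical start script: re-apply network setup on every container
# start, then launch the main application. Because a reboot stops and
# starts the same container, this runs again after every reboot.
set -u

# Routes live in kernel memory, so re-apply unconditionally. "ip route
# replace" is idempotent, unlike "route add", which fails if the route
# already exists.
ip route replace 10.0.10.0/24 via 172.17.0.1 2>/dev/null \
  || echo "warning: could not set route (needs NET_ADMIN/privileged)"

start_msg="starting application"
echo "$start_msg"
# exec /usr/bin/my-app "$@"  # exec so the app becomes PID 1 and gets signals
```

Using `exec` for the final hand-off (commented out above as a placeholder) keeps the application as the container's main process so it receives stop signals directly.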


This is exactly what we are trying to do. However, when we reboot the device, the application is not restarted. How can we work around this?


Do you mean that your container does not get started? Is the corresponding service reported as “running” in the web dashboard?

We are referring to what was stated above. The way we understand it, the application in the container is not started again after a reboot, because the container is resumed rather than recreated.
However, the route is cleared on reboot, and we only set the route at application start, which only happens when an update is pushed. So how can we make sure the route is always set after the device reboots?

Ok, I see, thanks for the clarification. Let me also clarify a couple of points.

When you think about container lifecycle, there are 4 main operations: create, start, stop, and delete.

After you create a release in balenaCloud (e.g. with the balena push command) and the corresponding image is downloaded on the device, the container is created and then started by the supervisor. On reboot, the container goes through a stop/start cycle, which stops all processes running in the container and then runs its entry point again. When a container is stopped and started, the same image is used and the container data is not touched. On update, however, the supervisor stops and deletes the existing container, since a new image has to be used, and then creates and starts a new container. Only data stored in a volume survives such a re-creation.
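The stop/start vs. delete/create difference can be seen with any Docker-compatible engine (an illustrative session, assuming a host with a running daemon; names are placeholders):

```shell
# stop/start keeps the container's writable layer:
docker run -d --name demo alpine sleep 600
docker exec demo sh -c 'echo hello > /data.txt'
docker restart demo                # stop + start of the same container
docker exec demo cat /data.txt     # prints "hello": same container, same data

# delete/create (what an update does) starts from a fresh layer:
docker rm -f demo
docker run -d --name demo alpine sleep 600
docker exec demo cat /data.txt     # fails: new container, file is gone
```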

So, back to your reboot case. As mentioned above, the container is stopped and started in this case, which means the container entry point is executed again. If this does not happen, something must be wrong with the engine that prevents it from starting the container normally. For instance, the data partition on the device may be full.

You can verify whether your container is started by ssh-ing into the host OS and running balena-engine ps. It will display a list of all running containers on the device, including the supervisor and your application container.
If your container is shown as running there, your entry point must have been executed, and the reason your code didn’t run must be on the application side. Some extra logging showing what the entry point executes will help with debugging.

Hope this helps.


This clarifies a lot, thanks!

We’ve now added the route add command to the command of the docker-compose.yml. Is this different from adding it to the entry point of the container?


It depends on your container. The command in docker-compose.yml defines what is executed as the default command in the container. If your container defines an ENTRYPOINT, then whatever is in command becomes the arguments to that ENTRYPOINT. If no ENTRYPOINT is defined, command itself is executed when the container starts.
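As an illustration of that interplay (the service name, script path, and flags below are made up):

```yaml
# docker-compose.yml fragment. If the image's Dockerfile defines
#   ENTRYPOINT ["/usr/local/bin/start.sh"]
# then this command becomes its arguments and the container runs:
#   /usr/local/bin/start.sh --port 8080
# With no ENTRYPOINT defined, "command" itself is what gets executed.
  container_name: my-service
  image: 'IMAGE'
  command: ["--port", "8080"]
```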

A common pattern for setup tasks like route add that must run before the actual program is to create a start script, which executes these commands and then starts the actual program. The container image then uses this script as its ENTRYPOINT or CMD.

An example of this pattern can be found in the balenaSound repo.
This is the start script that is executed for the spotify service:
In the docker image the start script is executed as the command:
