Container lifecycle


Are there any differences between the Docker container lifecycle, for example the flow chart at

and the balenaCloud device lifecycle of a service?


@jason10 there’s not a quick answer to this, but the main difference is that the actions and changes are instigated via Docker API requests rather than CLI actions, although the underlying actions are the same. The lifecycle is a subset of the diagram from the Medium post you linked as we do not (currently) support all of the same actions. For example, pause and unpause of containers is not currently supported.


Thanks. I also notice the diagram does not have any healthcheck arcs.

I was confused earlier because the dashboard wasn’t showing my container restarting; a 10-second restart loop is too fast for the dashboard to display. But using balena ps, or opening a terminal and watching it get disconnected, was enough to see that the container was being restarted.


@chrisys But when the CMD of the Dockerfile.template for a service exits, should we see the container restart?


@jason10 yes, the container would exit and, unless you’ve changed the restart policy (we set “always” as the default), it will be restarted by balenaEngine.
Unless you’re running something like systemd inside the container, which can prevent the container from exiting when the CMD process exits.
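If you did want to change that default restart policy, it is set per service in docker-compose.yml. A sketch (the service name is illustrative, not from this thread):

```yaml
version: '2'
services:
  camera:
    build: ./camera
    # Default is "always"; "no" stops balenaEngine from restarting the
    # container when its CMD exits. "on-failure" and "unless-stopped"
    # are the other standard compose values.
    restart: "no"
```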


I think the diagram might not have healthcheck arcs because in regular Docker, a failed healthcheck triggers no action other than marking the container as unhealthy.

When a healthcheck fails on a container run by balenaEngine and the container becomes unhealthy, we restart it. I guess that also counts as a difference from that diagram.


@pcarranzav by systemd, do you mean a privileged container? I have INITSYSTEM on and establish a udev rule.

Answering my own question:

root@orbitty-tx2-ec60d19:/usr/src/app# ps ax
    1 ?        Ss     0:00 /sbin/init quiet systemd.show_status=0
   38 ?        S      0:00 sleep infinity
   52 ?        Ss     0:00 /lib/systemd/systemd-journald
   54 ?        Ss     0:00 /lib/systemd/systemd-udevd
  129 ?        Ss     0:00 /usr/sbin/cron -f
  134 ?        Ss     0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
  146 ?        Ss     0:00 avahi-daemon: running [orbitty-tx2-ec60d19.local]
  147 ?        Ss     0:00 /usr/sbin/sshd -D
  149 ?        S      0:00 avahi-daemon: chroot helper
  272 pts/1    Ss     0:00 /bin/bash
  291 pts/1    R+     0:00 ps ax


By systemd I mean “INITSYSTEM = on”. This starts a systemd init process that keeps the container running.
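For reference, that flag is set in the Dockerfile(.template) of the legacy resin/balena base images (a sketch; verify against your base image’s documentation, since newer base images dropped INITSYSTEM support):

```dockerfile
# Enable the built-in systemd init in (older) balena base images: PID 1
# becomes systemd, so the container keeps running even if your app dies.
ENV INITSYSTEM on
```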

(A container can be privileged but not have systemd)


Since I see systemd running, I guess I should use an api call to restart the container?


Yes, you can use an API call to the supervisor API, or you can make a healthcheck command that fails if your main process isn’t running, which will eventually also cause a container restart.
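The supervisor API call could look like the sketch below (my sketch, not from the thread). It assumes the service has the io.balena.features.supervisor-api label so the BALENA_SUPERVISOR_ADDRESS and BALENA_SUPERVISOR_API_KEY variables are injected; the endpoint and payload follow the supervisor API docs, so verify them against your supervisor version:

```shell
#!/bin/sh
# Sketch: ask the balena supervisor to restart the app's containers.
# Assumes BALENA_SUPERVISOR_ADDRESS and BALENA_SUPERVISOR_API_KEY are
# set (requires the io.balena.features.supervisor-api label).
restart_app() {
  app_id="$1"
  curl -s -X POST \
    --header "Content-Type: application/json" \
    --data "{\"appId\": ${app_id}}" \
    "${BALENA_SUPERVISOR_ADDRESS}/v1/restart?apikey=${BALENA_SUPERVISOR_API_KEY}"
}
```

Note that /v1/restart restarts all of the app’s containers; check the docs for per-service endpoints on multicontainer apps.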


Ah, like “pgrep mycommand”?!
I tried a healthcheck command based on the logging of my main process, but it isn’t working the way I want it to yet.


Yes, something like that could work… depending on what your command does, you could do something more comprehensive.
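A minimal version of the pgrep idea in a Dockerfile (my sketch; “mycommand” and the timings are placeholders):

```dockerfile
# Restart-on-death sketch: mark the container unhealthy when the main
# process is gone; balenaEngine restarts unhealthy containers.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD pgrep mycommand > /dev/null || exit 1
```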

For instance, if your command exposes an API, you could add an endpoint that reports whether your app is healthy, like we do in the supervisor container:
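For the HTTP-endpoint approach, the healthcheck could probe a local status route (a sketch; the port and path are illustrative):

```dockerfile
# Probe a local health endpoint; curl's -f flag makes any HTTP error
# status (or no response) fail the check.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost:8080/healthy || exit 1
```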

Or you could make your command write to a file at regular intervals, and check the file modification time in the healthcheck command, much like a watchdog.
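The watchdog idea can be sketched as a small shell function (my sketch, not from the thread; the heartbeat path and threshold would be your own, and stat -c %Y assumes GNU or busybox stat):

```shell
#!/bin/sh
# Watchdog-style healthcheck sketch: the main process is expected to
# touch a heartbeat file regularly; the check fails when the file is
# missing or older than max_age seconds.
is_fresh() {
  file="$1"
  max_age="$2"
  [ -f "$file" ] || return 1             # no heartbeat file -> unhealthy
  now=$(date +%s)
  mtime=$(stat -c %Y "$file")            # file modification time (epoch)
  [ $(( now - mtime )) -le "$max_age" ]  # stale heartbeat -> unhealthy
}
```

A healthcheck command could then run something like is_fresh /tmp/heartbeat 30 from a script that defines this function.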


I tried checking for modifications to the log file, but either the boost::logger is buffering or stat isn’t getting the right time, etc.

I’ll try pgrep. The service connects to a USB camera and sends images to another service, so health is not something the camera service provides.


@pcarranzav is there a way to get output from the healthcheck?

Does the healthcheck have access to files on the shared-volumes specified in the docker-compose?


Hi Jason, I’m not sure I fully follow the question. What output from the healthcheck? Do you mean you want to be able to query whether some container is unhealthy? As per the documentation on healthcheck, the healthcheck test command can only have the following exit codes:

0: success - the container is healthy and ready for use
1: unhealthy - the container is not working correctly
2: reserved - do not use this exit code


I meant any stdout/stderr from the command or script, to help debug the healthcheck. In the end I copied all output to a log file on /data so that I could see if I had missed anything in my script.


Ah okay, that makes sense. I don’t think there is a super easy way to get the healthcheck command’s output, but logging to a file is probably a good approach.


@jason10 you can try sending the stdout from your healthcheck command to /dev/console; this should send the output to the container’s logs, so they’d be visible in the balenaCloud dashboard. (disclaimer: I haven’t tried this :stuck_out_tongue: )
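That suggestion could be combined with the log file in a small wrapper (my sketch, also untested on a device; the /data log path is illustrative):

```shell
#!/bin/sh
# Wrap a healthcheck command so its output goes both to a log file and
# to a console device, while preserving the check's exit code.
run_check() {
  log="$1"      # e.g. /data/healthcheck.log (illustrative path)
  console="$2"  # e.g. /dev/console, to reach the container's logs
  shift 2
  out=$("$@" 2>&1)  # capture the check's combined output
  rc=$?             # remember its exit code before logging
  printf '%s\n' "$out" | tee -a "$log" > "$console"
  return "$rc"
}
```

A healthcheck might then be run_check /data/healthcheck.log /dev/console pgrep mycommand.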


I haven’t seen the healthcheck output in the dashboard console. I’m using tee to send the output to both stdout and the log file on shared persistent storage, and then opening a shell in another container on the same device to make sure that the healthcheck and the container it is checking are working together.


Is that working for you now @jason10 ? Also, I wonder if you tried Pablo’s suggestion of directing the healthcheck command output to /dev/console?