Restart container at every update

Hi all,

When I deploy an update, not every container contains changes, so not all containers are downloaded and restarted again (which is great; unnecessary downloading/restarting would be wasteful). But I have an ElectronJS kiosk container which connects to another container. The ElectronJS container shows a placeholder until the other container has started. I’ve created a (healthcheck-based) mechanism for this and that works great!

However, that other container is updated 9 out of 10 times when there is a new build. It contains a webserver serving the content that has to be shown in the ElectronJS kiosk. But after an update of this container, the kiosk obviously doesn’t restart. So when there are interface changes, the user has to reboot the whole system. That is, of course, not really the nicest way to go.

So, my question is: is there a way, and what is the best way, to restart one container when another container is updated?

I’ve thought about it, and what I’ve come up with is this: when the webserver container starts up, it sends a request to the supervisor to restart the ElectronJS container. But that restart isn’t necessary when the webserver container wasn’t actually updated. I’ve had multiple ideas, but I figured I’d better ask the experts and hear what they would do! :slight_smile:

Thanks in advance!

Hey,

Your mechanism of getting the supervisor to restart the service is a logical one. That is what I would do.

You could always use the same channel to get the release ID for the webserver container, write it to disk in the container, and then only restart the kiosk if that value is different. The logic being: if the value matches, don’t restart the kiosk; otherwise, do.
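Something like this minimal sketch, run on webserver startup, could work. It assumes the `io.balena.features.supervisor-api` label is set for the service (so `BALENA_SUPERVISOR_ADDRESS` and `BALENA_SUPERVISOR_API_KEY` are available), that `jq` is installed, and the service names and file path are placeholders you’d adapt:

```sh
#!/bin/sh
# On webserver startup: compare the current releaseId with the last one we
# saw; only ask the supervisor to restart the kiosk when it has changed.

STATE_FILE=/data/webserver-release-id   # placeholder path on a shared volume

# Ask the supervisor for the application state and pull out this service's
# releaseId ("webserver" is a placeholder service name).
current=$(curl -s \
  "$BALENA_SUPERVISOR_ADDRESS/v2/applications/state?apikey=$BALENA_SUPERVISOR_API_KEY" \
  | jq -r '.[].services.webserver.releaseId')

stored=$(cat "$STATE_FILE" 2>/dev/null)

if [ "$current" != "$stored" ]; then
  echo "$current" > "$STATE_FILE"
  # Release changed: restart the kiosk service via the supervisor API.
  curl -s -X POST \
    "$BALENA_SUPERVISOR_ADDRESS/v2/applications/$BALENA_APP_ID/restart-service?apikey=$BALENA_SUPERVISOR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"serviceName": "kiosk"}'
fi
```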

What do you think?

Hi @richbayliss,

That’s exactly my thought about the release ID. I can save that in a text file in a volume.
I just posted this to gather some thoughts about this issue. I’m going to investigate this some more to make the user experience as good as possible. Another thought would be to just refresh the webpage, or go to the placeholder webpage while the container is updating or after it has updated.

I’ll post my progress and my solution here!
If anyone has another thought, I would like to hear them!

Thanks!

@bversluijs another idea, I think, is using the docker-compose.yml’s depends_on setting:

I think that would get your startup order correct as well. We use that in our multicontainer-getting-started project too, see:

Let us know if you had a chance to try either!

Hi @imrehg,

I use the depends_on function. It’s too bad that balena doesn’t support the healthcheck condition with depends_on. But depends_on doesn’t restart the kiosk when an update is installed anyway; it only waits, on first startup, until the dependent container(s) have started. Not until they’re fully functional, just started. That’s where the healthcheck option in the Docker Compose v2.1 file format would come in handy. But that doesn’t fix my problem though :slight_smile:
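For reference, this is the Compose file format v2.1 syntax I mean (service names are placeholders); as noted above, balena doesn’t currently support the long-form condition:

```yaml
# Compose file format v2.1: wait until the webserver is *healthy*,
# not just started. The long-form "condition" is the part balena's
# supervisor doesn't support.
version: '2.1'
services:
  webserver:
    build: ./webserver
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 30s
      timeout: 10s
      retries: 3
  kiosk:
    build: ./kiosk
    depends_on:
      webserver:
        condition: service_healthy
```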

It’s not criticism, by the way, just sharing some thoughts!

Correct me if I’m wrong regarding depends_on, but so far I haven’t seen this behavior when updating the containers.

Hi @bversluijs,

Indeed we still don’t support using depends_on with a healthcheck.

But, you can do this:

  1. The webserver exposes an endpoint that reports its own version (whatever versioning scheme you want to use). Alternatively, if you don’t want to write this endpoint, you can use the supervisor’s API to get the releaseId for the webserver container: https://www.balena.io/docs/reference/supervisor/supervisor-api/#get-v2-applications-state
  2. On startup, the ElectronJS container queries the webserver version and stores it.
  3. The ElectronJS container keeps polling the webserver version and comparing it to the stored one. If the version changes, it hits the supervisor API asking for a restart (see the sketch after this list).
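A rough sketch of step 3 as a shell loop; the version endpoint, poll interval, and service name are made up for illustration, and it assumes the supervisor API label is enabled for this service:

```sh
#!/bin/sh
# Poll the webserver's version endpoint; when it changes, ask the
# supervisor to restart this (kiosk) service.

VERSION_URL="http://webserver/version"   # placeholder endpoint
stored=$(curl -s "$VERSION_URL")

while true; do
  sleep 30
  current=$(curl -s "$VERSION_URL")
  if [ -n "$current" ] && [ "$current" != "$stored" ]; then
    curl -s -X POST \
      "$BALENA_SUPERVISOR_ADDRESS/v2/applications/$BALENA_APP_ID/restart-service?apikey=$BALENA_SUPERVISOR_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"serviceName": "kiosk"}'
  fi
done
```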

Alternatively, instead of hitting the supervisor API for a restart, you could make the webserver version check part of the ElectronJS container’s healthcheck script. balenaEngine has a feature by which it restarts any container that is marked unhealthy, so you could make the healthcheck exit 1 if the webserver version doesn’t match the stored one.
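A sketch of such a healthcheck script (again, the endpoint and file path are placeholders). Because the stored file lives in the container’s ephemeral filesystem, the baseline version is re-recorded automatically after each restart:

```sh
#!/bin/sh
# Healthcheck for the kiosk container: report unhealthy when the webserver's
# version no longer matches the one recorded when this container started.

STORED_FILE=/tmp/webserver-version       # placeholder path (ephemeral)
current=$(curl -sf http://webserver/version) || exit 1

if [ ! -f "$STORED_FILE" ]; then
  # First check after (re)start: record the current version, report healthy.
  echo "$current" > "$STORED_FILE"
  exit 0
fi

[ "$current" = "$(cat "$STORED_FILE")" ] && exit 0 || exit 1
```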

Not sure if these are the best solutions, but it’s what I would try if I were in your place :slight_smile: It’s also considered a good practice to make your application not rely much on depends_on, but rather to check whether the other things it depends on are available at any particular moment.

Hope this helps!

Apologies if I’m not totally understanding the recommendation here: should we not use depends_on and instead use a HEALTHCHECK? Or is HEALTHCHECK completely useless in balena? I’m attempting to use a healthcheck, but it simply keeps restarting the service.

@drewcovi no, you can and should use depends_on, but it’s independent of healthcheck. The healthcheck is useful on balena, and balenaEngine will restart containers that become unhealthy. You can see an example here: https://github.com/balena-io-playground/healthcheck-publicurl

This totally makes sense. Is this supported in the docker-compose file as well, or only in each independent Dockerfile.template?

You should be able to use the healthcheck field in the docker-compose.yml, as it seems to be supported by the supervisor: https://www.balena.io/docs/reference/supervisor/docker-compose/

Here’s how: https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
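For example, wiring a healthcheck script (like the version-check sketch earlier in this thread) into the compose file might look like this; the service name and script path are placeholders:

```yaml
version: '2.1'
services:
  kiosk:
    build: ./kiosk
    healthcheck:
      test: ["CMD", "/usr/src/app/healthcheck.sh"]   # placeholder script path
      interval: 30s
      timeout: 10s
      retries: 3
```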

The last reply to this topic is over 3 years old. Any improvement since then? This feature (restart all containers on update) seems like an obvious one for the platform to handle.

Hi there,

I believe that the behaviour today remains the same: if a container is unchanged, it will not be restarted when a device is updated to a new release. This is in line with docker-compose’s restart behaviour.

As a result, I also believe that the originally proposed method to get this behaviour is to use the depends_on field in the docker-compose file. When a container is restarted, all dependent containers should also be recreated.
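In the short form that balena’s compose subset supports, that would look something like this (service names are placeholders):

```yaml
version: '2.1'
services:
  webserver:
    build: ./webserver
  kiosk:
    build: ./kiosk
    depends_on:
      - webserver   # per this thread, kiosk should be recreated when webserver is
```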