Docker-compose support with pre-built images

Hi there,

We are migrating to resin.io from our custom deploy system, which was based on multiple microservices deployed with docker-compose.

We are aware of the ongoing project to support multicontainer deployments; in the meantime we are building a temporary solution based on the docker-compose approach by @justin8.

In summary, this solution ships a Docker-in-Docker (dind) container to the device and triggers docker-compose inside it to build and start our internal applications as containers.

Our main problem is that our internal applications have a number of private dependencies from our repositories, and we are not very fond of building images on the devices themselves. The guys at resin suggested using the resin CLI's `resin build`/`resin deploy` commands to build and deploy our images externally.

At this point we can build the images but not deploy them, as the only thing that can be pushed to the application is the dind image.

Our solution is to maintain our own private build servers and our own private Docker registry, and to give the devices credentials for the registry so they can pull the internal application images directly from inside the dind container.
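For reference, the pull flow inside the dind container can be sketched roughly like this. It is a dry run that only prints the commands, and `REGISTRY_URL`, `REGISTRY_USER` and `REGISTRY_PASS` are hypothetical names for the credentials we ship to the device:

```shell
#!/bin/sh
# Dry-run helper: prints each command instead of executing it, so the flow
# can be reviewed before pointing it at the real docker daemon.
run() { printf '+ %s\n' "$*"; }

# Hypothetical credential variables, assumed to be injected into the device.
REGISTRY_URL="registry.example.com"
REGISTRY_USER="device"
REGISTRY_PASS="secret"

run docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASS" "$REGISTRY_URL"
run docker-compose pull      # fetch the images referenced in docker-compose.yml
run docker-compose up -d     # recreate containers from the freshly pulled images
```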

Our questions:

  • Will this be closely compatible with resin.io's future multicontainer approach, i.e. could we eventually drop our private registry and push directly to resin's?

  • To update the internal applications automatically, we are thinking of using the resin API to force an application restart whenever an image is updated in our private registry, thus updating the containers. But this is quite a brute-force approach, as it introduces a fair amount of unpleasant downtime on the device. Any thoughts?

Hey @jarias

My first impression is that your setup will be able to use our multicontainer infrastructure, when it arrives, with a minimum of changes. Instead of your private Docker registry you will be able to use resin's, and you will also be able to use resin's builder to build the services described in the docker-compose file.

The containers will then be distributed as normal to your devices, as they are now.

For your second point, I think I would do something similar to what our supervisor does and receive an event on-device when an update is available. This would mean you only have to update the container that has changed, rather than every single one (and you also wouldn't have to restart the device). This could be as simple as an HTTP endpoint behind the resin public URL for your device, which then issues the appropriate docker(-compose) commands.
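As a sketch of what such an endpoint handler might run on the device (again a dry run that prints the commands; the `--no-deps` flag is standard docker-compose, and the service name would come from the update event):

```shell
#!/bin/sh
# Dry-run helper: prints the commands instead of executing them.
run() { printf '+ %s\n' "$*"; }

# Update a single service in place: pull its new image, then recreate only
# that container, leaving its siblings and dependencies running.
update_service() {
  svc="$1"
  run docker-compose pull "$svc"
  run docker-compose up -d --no-deps "$svc"
}

update_service api   # "api" is a placeholder for the service named in the event
```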

You may even be able to get the resin-supervisor to do some of this task for you, but @pcarranzav would be able to give you more details if so.

Let me know if you would like any further details,
Cameron

Hey, just to follow up,

We are now moving into production with the approach described above. Our application is made up of a number of microservices that run as containers using the temporary docker-compose approach.

Our CI server uploads new Docker images to our private registry, and the devices then download them automatically (they are specified in the docker-compose.yml file). We just modified the original multicontainer approach to `docker login` to our registry (credentials are managed through automatically cycled environment variables) and to use image references instead of building the images on-device.
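For illustration, the relevant change in the compose file is just swapping `build:` entries for `image:` references into the private registry (the registry, service names and tags below are made up):

```yaml
# docker-compose.yml (sketch): images are pulled from the registry,
# not built on the device.
version: '2'
services:
  api:
    image: registry.example.com/acme/api:1.4.2    # hypothetical reference
    restart: always
  worker:
    image: registry.example.com/acme/worker:1.4.2
    restart: always
```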

New deployments are currently managed through the brute-force approach: our CI server performs an application restart every time a microservice image is updated. This introduces some downtime on our devices, so we schedule deploys at the end of the day, when activity is low (this also reduces bandwidth usage for our customers). On the plus side, this approach only downloads the images that actually changed, via `docker-compose pull`.
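For completeness, one way to issue such a restart is against the device's supervisor API (dry run below; the `/v1/restart` endpoint and the `RESIN_SUPERVISOR_*` variables are per the resin supervisor docs as we understand them, and the appId is a placeholder):

```shell
#!/bin/sh
# Dry-run helper: prints the command instead of executing it.
run() { printf '+ %s\n' "$*"; }

# Defaults are placeholders; on a real device these are provided by resin.
RESIN_SUPERVISOR_ADDRESS="${RESIN_SUPERVISOR_ADDRESS:-http://127.0.0.1:48484}"
RESIN_SUPERVISOR_API_KEY="${RESIN_SUPERVISOR_API_KEY:-secret}"

# Restart the whole application on the device (the brute-force path).
run curl -X POST -H "Content-Type: application/json" \
  -d '{"appId": 1234}' \
  "$RESIN_SUPERVISOR_ADDRESS/v1/restart?apikey=$RESIN_SUPERVISOR_API_KEY"
```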

We are considering the additional effort of a fine-grained update strategy, either via a custom API or by using the resin-supervisor (would this imply a custom supervisor deployment…?). We are implementing our own health monitor using AWS IoT, so it could also command the devices to pull single images and restart specific containers by driving the docker daemon directly, but that is a long shot.

If anyone has any thoughts on this I would be pleased to hear them!

PS: Looking forward to first-class multicontainer resin; let us know if you need any help or testers along the way.

Looking forward to beta-testing this!