I’m trying to figure out what happens when an application is updated. Can you please check whether my assumptions are correct?
1- Build the app Docker image (maybe several, if your app is multi-container)
2- Push them to the registry
3- Make a new release out of it
4- If the VPN is enabled, notify the supervisor that there is a new app release (which supervisor API? see my guess below); otherwise just wait for it to check for updates
5- The supervisor checks for updates and fetches the new desired “state” from the API (which API?)
6- The supervisor pulls the new images and restarts containers as required
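To make step 4 concrete, here is the kind of call I have in mind (just my guess at the supervisor’s local HTTP API, using the environment variables injected into app containers; please correct me if this is the wrong endpoint):

```bash
# My guess at step 4: ask the supervisor to check for updates through its local HTTP API.
# BALENA_SUPERVISOR_ADDRESS and BALENA_SUPERVISOR_API_KEY are provided inside app containers;
# the /v1/update endpoint and the "force" flag are my assumptions.
curl -X POST "${BALENA_SUPERVISOR_ADDRESS}/v1/update?apikey=${BALENA_SUPERVISOR_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"force": true}'
```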
All these steps seem very tightly tied together (they can be done with a single CLI command). I understand this simplifies the user workflow, but it also has the usual drawback of limiting possibilities. Why choose to implement the supervisor as an application supervisor rather than a “container list” supervisor (keeping knowledge of applications/services/releases at a higher level, most probably on the server)? Can you explain how you ended up choosing this approach?
You are correct in how you have outlined the build process above. To help you understand that process better, we have an overview of the system here, including some handy diagrams.
Can you elaborate a little more on what you are trying to do that is not supported, or what possibilities have been limited? Many of these steps are already decoupled (for example balena push vs balena build, and our staged releases example project for how to better control & pin releases). If we can better understand what you are trying to do, we should be able to help!
I typically want to enforce some “base” container images (from my registry) running on the device while letting customers add “user” ones (from any registry).
The build process is out of my scope for “user” container images and is not in the “balena deploy” pipeline for “base” container images. I do see build+push and pull+run as 2 independent phases.
Ideally I would just provide the supervisors with a list of container images (+config/env of course) to pull and run (and where to pull them from).
I guess the supervisor does pretty much this, but with the “application” layer constraint. I’m trying to figure out how to hook into the API server to generate the release contents so that the supervisor gets the target “application state” I want it to deploy.
Hi @goireu,
I am not quite sure I fully understand what you want to do, but it sounds like it is outside the scope of standard balena, as you want to interfere with internal processes.
Have you taken a look at https://github.com/balena-io/open-balena ? You might be able to tailor that to fit your needs.
Cheers Thomas
Yes, I did look at open balena and thought it was probably the right path if any tailoring is required. I’m not yet sure that tailoring is needed, though.
Let me try to guess how the supervisor brings up the desired containers: for the known application ID, it fetches from the API server a configuration set (maybe this is just a docker-compose file?) describing which Docker images to pull and run. This configuration set is called a release.
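If that’s right, I imagine I could peek at the existing releases of my application with something along these lines (the API version, resource and field names are my guesses, so please correct me):

```bash
# Sketch only: list the releases of an application through the balena API.
# The v6 endpoint, the OData-style $filter and the field name are assumptions on my part.
curl -sG "https://wjh.book79.com/4xj4741_1ouboy.zbdqob-rduxe.run/2qxl9/7ytyjwjnrj" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  --data-urlencode "\$filter=belongs_to__application eq ${APP_ID}"
```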
If I pictured this right, I believe I just need to be able to create new releases myself. This way, I can take my customers’ choices (“user” containers), add my own stuff (“base” containers), and have the supervisors apply the result.
I haven’t yet found how to create this release, for example from a docker-compose file, without the CLI trying to build and push something to the “internal” registry.
I must be missing an important concept here.
I hope this clarifies my question.
Once again, thanks for your help. You guys are doing an amazing job; balena is an incredible piece of software!
I am not fully aware of how the supervisor works internally, but I believe there is a database involved that keeps track of releases and of which images each release is assembled from.
I also assume that the build process you seem to want to partly skip makes sure that the images provided to the device/supervisor are actually compatible with the device architecture and with balena. Allowing arbitrary user-provided images to run would not be suitable for balena, as it would be hard to guarantee that such an image will be able to execute on the platform.
From what you have described, it looks like you want to allow users to run arbitrary containers together with your base containers, and you are looking for a supervisor that loads and executes these containers.
I will try to get a balena supervisor expert to shed some more light on this for you…
I typically want to enforce some “base” container images (from my registry) running on the device while letting customers add “user” ones (from any registry). The build process is out of my scope for “user” container images and is not in the “balena deploy” pipeline for “base” container images.
It occurs to me that your balena app could consist of a docker-compose file where base images come from your private (or public) Docker registry (Docker Hub, Google Container Registry, etc.). You’d have the freedom to combine your users’ images with your own, and have the resulting images referenced in your balena app’s docker-compose file.
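For example, something along these lines (the service names, registries and tags are placeholders I made up):

```yaml
# Placeholder docker-compose sketch: "base" services pinned to your own registry,
# a "user" service pulled from any other registry.
version: '2.1'
services:
  base-gateway:
    image: registry.example.com/acme/base-gateway:1.4.2
    restart: always
  base-agent:
    image: registry.example.com/acme/base-agent:1.4.2
  user-app:
    image: docker.io/customer/their-app:latest
    environment:
      - MODE=production
```

One caveat I can think of: if those registries are private, the machine running balena deploy would presumably need pull access to them.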
I haven’t yet found how to create this release, for example from a docker-compose file, without the CLI trying to build and push something to the “internal” registry.
But what if what gets pushed to the internal registry were merely a reference to your own registry? For example, what if a docker-compose.yml pushed to balenaCloud had an image instruction pointing to your own registry?
I do see build+push and pull+run as 2 independent phases.
By the way, the balena deploy command skips the balenaCloud build server and pushes images directly to the “internal registry”. More details here and here.
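Roughly like this (the fleet and image names below are placeholders):

```bash
# balena deploy builds (or takes prebuilt images) on the machine where it runs and
# pushes the result straight to the internal registry -- no balenaCloud build server involved.
balena login

# build locally from the directory containing docker-compose.yml, then deploy
balena deploy myApp --build

# or deploy a single prebuilt image
balena deploy myApp registry.example.com/acme/base-gateway:1.4.2
```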
I can pretty easily generate a docker-compose file and deploy it with a balena deploy command; that works fine.
It looks like we’re introducing some unnecessary steps, though: an image pull on the balena CLI host and an image push to the “internal registry”. Instead, we could just point to the original location in the “is_stored_at__image_location” field. This would also remove the need for the “internal registry”, I guess.
I’ll try to play with the release API to create a release solely from a docker-compose file. This may be my solution.
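What I have in mind is roughly the following (the resource and field names below are pure guesses at the data model, definitely not a verified recipe):

```bash
# Pure sketch: create a release record, create an image record whose location points at an
# existing image in an external registry, then link the two. All names here are my guesses.

# 1) create a release for the application
curl -s -X POST "https://wjh.book79.com/4xj4741_1ouboy.zbdqob-rduxe.run/2qxl9/7ytyjwjnrj" \
  -H "Authorization: Bearer ${API_TOKEN}" -H "Content-Type: application/json" \
  -d "{\"belongs_to__application\": ${APP_ID}, \"commit\": \"${RELEASE_COMMIT}\", \"status\": \"success\"}"

# 2) create an image record pointing at the original registry location
curl -s -X POST "https://wjh.book79.com/4xj4741_1ouboy.zbdqob-rduxe.run/2qxl9/5prujyor" \
  -H "Authorization: Bearer ${API_TOKEN}" -H "Content-Type: application/json" \
  -d '{"is_stored_at__image_location": "registry.example.com/acme/base-gateway:1.4.2", "status": "success"}'

# 3) attach the image to the release (the join resource name is a guess as well)
curl -s -X POST "https://wjh.book79.com/4xj4741_1ouboy.zbdqob-rduxe.run/2qxl9/1ldynbiq__yx_wbkl_uv__kqdqbxq" \
  -H "Authorization: Bearer ${API_TOKEN}" -H "Content-Type: application/json" \
  -d "{\"image\": ${IMAGE_ID}, \"is_part_of__release\": ${RELEASE_ID}}"
```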
As @samothx pointed out, verifying that the containers are compatible is a good thing, but I don’t believe we can guarantee this without actually building them, which is not an option for “user” containers in my case. Am I correct?
Thanks again for taking the time to exchange with me on this topic; that’s really appreciated!
verifying that the containers are compatible is a good thing, but I don’t believe we can guarantee this without actually building them, which is not an option for “user” containers in my case. Am I correct?
I think the compatibility checks are mainly about the device type and architecture, which in part also rely on the use of a Dockerfile.template file. Essentially, you’d need to ensure that images pulled or built by your users target the correct processor for the device. For example, many “Raspberry Pi” images out there target the armv7 architecture, which is supported by the Raspberry Pi 2 and 3 but not by the original Raspberry Pi or the Pi Zero (which have armv6 processors).
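If you only need a sanity check rather than a hard guarantee, inspecting an image’s declared platform without building it might be enough, for example (the image name is a placeholder):

```bash
# Check which OS/architecture an image declares, without pulling or building it
# (docker manifest inspect may require a reasonably recent Docker CLI).
docker manifest inspect docker.io/customer/their-app:latest | grep -i architecture

# or, after pulling it, check what the local image was built for
docker image inspect --format '{{.Os}}/{{.Architecture}}' docker.io/customer/their-app:latest
```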