Components of balenaCloud

#1

I’m writing my bachelor thesis on applying balenaCloud to automate the deployment process for the robots at our university. I’d like to describe the deployment process in some level of detail, which requires some knowledge of the relationships between the components of the ecosystem (the balena API, Git server, Builder server, Docker registry, balenaOS, as well as the Supervisor, …). So far I’ve gathered a fair amount of knowledge about the platform from the docs, from forum posts (for which you guys have given excellent support), and from examining the source code. It would be really great if you could share some more details about the platform, especially those components that are not open-sourced, like the Git server, Build server, and the closed-source parts of balenaCloud.

In particular I have some questions:

  • Which components communicate with the core API during the deployment process, and for what purpose?
  • What does the CI pipeline look like in detail? I.e., what triggers the image-building process, and how are devices notified about a new application state (e.g. through webhooks, dedicated CI software like Jenkins or TravisCI, or proprietary solutions built into the API, …)?
  • Is the traffic between the devices and the whole platform (i.e. all components) tunneled through the VPN, or just the communication between a device and the core API?

Thanks,
Hieu Nguyen


#4

Oops, apologies @ngnmhieu, this slipped through the cracks.

I can help with your questions, but I won’t go into too much detail, mostly so I don’t end up rambling and confusing you. Feel free to ask as many questions as you need, and I’ll do my best to answer or find the appropriate people.

You asked:

Which components communicate with the core API during the deployment process, and for what purpose?

and

What does the CI pipeline look like in detail? I.e., what triggers the image-building process, and how are devices notified about a new application state (e.g. through webhooks, dedicated CI software like Jenkins or TravisCI, or proprietary solutions built into the API, …)?

I’ll answer these two together.

The core platform merely deploys Docker images and doesn’t really care how they are built. There are currently a few different ways to build and upload images. balena deploy uploads images present on the user’s local Docker daemon, building them locally from source if needed. git push and balena push upload the source and use our builder to create the images. git push goes to our builder via our git server, balena push goes to our builder directly. Images are uploaded to our registries.
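As a quick illustration, the three upload paths just described can be summarized as a small lookup. This is purely a sketch; the hop names are informal shorthand, not real component hostnames:

```python
# The three ways to build and upload images, as described above.
# Purely illustrative; the hop labels are informal shorthand.
ROUTES = {
    "balena deploy": ["local Docker daemon", "registry"],
    "git push":      ["git server", "builder", "registry"],
    "balena push":   ["builder", "registry"],
}

for cmd, hops in ROUTES.items():
    print(f"{cmd}: {' -> '.join(hops)}")
```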

Broadly, the client first authenticates with the API and figures out which application the release is for. It then creates a new release for that application on the API based on the project’s manifest, i.e. the parsed docker-compose.yml file. The API replies with information about where the artifacts should be uploaded. The client then uploads the images to the registry and marks the release as successful or failed accordingly.
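A minimal sketch of that handshake, assuming invented field names (the payload shape here is my assumption for illustration, not the actual balena API schema):

```python
# Hedged sketch of turning a parsed docker-compose manifest into a
# release-creation payload. Field names are assumptions, not the real API.
def build_release_request(app_id: int, compose: dict) -> dict:
    """Build the payload a client might send to create a release."""
    services = compose.get("services", {})
    return {
        "application": app_id,
        "composition": compose,          # the parsed docker-compose.yml
        "services": sorted(services),    # service names in the release
        "status": "in-progress",         # marked success/failed after upload
    }

compose = {"services": {"main": {"build": "./main"}, "logger": {"image": "alpine"}}}
print(build_release_request(42, compose)["services"])  # ['logger', 'main']
```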

After a successful release is created for an application, the API pushes a notification via the VPN to the online devices that belong to that application. Devices that can’t be reached at that point will find out about the new release when they next check in with the API. The service on the device responsible for pulling the update is the Supervisor, which will either pull the new images normally or request a delta from the delta server (if deltas are enabled for the device and/or the application).
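The Supervisor’s choice between a plain pull and a delta can be sketched like this. The function name and return format are invented for illustration (a delta needs a base image on the device to diff against):

```python
# Illustrative-only model of the update decision described above;
# the function name and return format are invented, not Supervisor code.
from typing import Optional

def plan_update(current_image: Optional[str], target_image: str,
                deltas_enabled: bool) -> str:
    """Return which kind of download the device would perform."""
    if deltas_enabled and current_image is not None:
        # A delta is computed between the image on the device and the target.
        return f"delta:{current_image}->{target_image}"
    # No delta support (or nothing to diff against): pull the full image.
    return f"pull:{target_image}"

print(plan_update("app:v1", "app:v2", deltas_enabled=True))   # delta:app:v1->app:v2
print(plan_update(None, "app:v2", deltas_enabled=True))       # pull:app:v2
```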

Is the traffic between the devices and the whole platform (i.e. all components) tunneled through the VPN, or just the communication between a device and the core API?

The VPN is only used to push a certain set of notifications from the API to devices that are online at the time. In our terminology, “online” means “connected to our VPN”. Examples include the new-release notification we just discussed, as well as requests to restart the user application, reboot the device, and so on.

The only other traffic that goes through our VPN is from the “Public Device URLs” feature, and SSH user sessions via the Dashboard terminal or CLI.
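Summarizing the answer above as a tiny lookup (the labels are my own, not balena terminology):

```python
# What travels over the VPN, per the answer above; everything else
# (e.g. registry image pulls, regular API check-ins) goes over
# ordinary connections.
OVER_VPN = {"api_notifications", "public_device_urls", "ssh_sessions"}

def uses_vpn(traffic_kind: str) -> bool:
    return traffic_kind in OVER_VPN

print(uses_vpn("api_notifications"))  # True
print(uses_vpn("image_pull"))         # False
```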

I hope that helps.


#5

Thanks for the detailed answer!

The images are built by the builders, which means they’ll be uploaded by the builders to the registry. You mentioned the “client” in:

Broadly, the client first authenticates with the API and figures out which application the release is for. It then creates a new release for that application on the API based on the project’s manifest, i.e. the parsed docker-compose.yml file. The API replies with information about where the artifacts should be uploaded. The client then uploads the images to the registry and marks the release as successful or failed accordingly.

Do you mean the client is the builders or the balena CLI on the developer machine?

One question about the build trigger after the code has been pushed to the git repository: is any kind of hook used to trigger the build process on the builders? Do you use any CI server like TravisCI or Jenkins?


#6

In this case both are examples of clients, and both follow the same workflow when it comes to creating releases on our API.

We don’t use CI-like infrastructure between the git server and the builder; instead, the git server hits an endpoint exposed by the builder (and streams the output back through the same connection).

The hook that makes this call is a standard git hook, which performs some tasks and forwards the source to the endpoint mentioned above.
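A hedged sketch of what such a hook might run. The endpoint URL is hypothetical, and the use of `git archive` piped to `curl` is my assumption, not the actual hook:

```python
# Sketch of a post-receive-style hook that streams the pushed source to a
# builder endpoint. The URL and the archive/upload pipeline are assumptions.
import shlex

BUILDER_ENDPOINT = "https://builder.example.com/v3/build"  # hypothetical

def forward_command(ref: str, endpoint: str) -> str:
    """Build the shell pipeline a hook might run: archive the pushed ref
    and stream it to the builder, which streams the build output back."""
    archive = f"git archive {shlex.quote(ref)}"
    upload = f"curl -sN -X POST --data-binary @- {shlex.quote(endpoint)}"
    return f"{archive} | {upload}"

print(forward_command("master", BUILDER_ENDPOINT))
```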

Let me know if any of this isn’t clear!
