We’ve got a use case EXTREMELY similar to the following thread where we have one version of our application that runs an integrated app/api with an attached screen and then one that runs with only the api and communicates over TCP/IP.
The difference here is that the integrated device with the screen is a Pi 3B+, while the standalone api runs on a Pi Zero W. They will share 2 of 3 services with the exact same code base. That means I’m forced to build two different applications, which seems like the best option anyway, but I don’t really want to maintain two separate repositories.
What would work well is if I could somehow maintain two separate docker-compose files: one that contains all three services and another that has only two. The second-best option would probably be optionally starting services, but it doesn’t seem like that is possible. Being a relative docker noob, I wanted to post here and see if you all had any suggestions.
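A minimal sketch of what the two files might look like, with hypothetical service names (balena supports compose file format 2.1), where the headless variant simply omits the third service:

```yaml
# compose/docker-compose.screen.yml -- full three-service stack for the Pi 3B+
# (service names here are hypothetical placeholders)
version: "2.1"
services:
  api:
    build: ./api
  worker:
    build: ./worker
  ui:                  # this service is omitted in docker-compose.headless.yml
    build: ./ui
```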
This answer on SO seemed like a good option, but I’m not sure how to define the key based on variables, or what options we have specific to Balena.
Hi there, another solution, if you have a different Dockerfile.template for each of your services, is to check the device type before running the services you need to include/exclude: something like "if $wrongDeviceType then sleep infinity", so that you only run the services each device actually needs.
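A minimal sketch of that gate as an entrypoint script, assuming the BALENA_DEVICE_TYPE variable the supervisor injects into containers; the device-type slugs and service command are hypothetical, so check them against your own fleet:

```shell
#!/bin/sh
# Gate a service on the device type it is meant for.
# Assumptions: BALENA_DEVICE_TYPE is set by the balena supervisor,
# and "raspberrypi3" is the slug for the screen device.

# Return success if this device type should run the service.
should_run() {
    [ "$BALENA_DEVICE_TYPE" = "$1" ]
}

# In a real entrypoint you would do something like:
#   if should_run "raspberrypi3"; then exec my-screen-service
#   else sleep infinity; fi
# Demo with a hard-coded value so the decision is visible:
BALENA_DEVICE_TYPE="raspberry-pi"
if should_run "raspberrypi3"; then
    echo "run service"
else
    echo "idle (sleep infinity in a real entrypoint)"
fi
```

The "sleep infinity" branch keeps the container alive so the supervisor doesn’t endlessly restart it, at the cost of the image still being built and downloaded to every device.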
I’ve just opened a feature request on balena-cli for command-line specification of the docker-compose.yml file to use [1]. For now, though, we simply use the one file. If you wanted, you could define two files with differing services and write a small script in the repo that you use to deploy, which copies one or the other of the docker-compose.yml files from a subdirectory into the root just before running balena push or balena deploy.
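That wrapper script might look something like the sketch below; the compose file paths and fleet names are hypothetical, and only the final push line is balena-specific:

```shell
#!/bin/sh
# Deploy wrapper: pick the compose variant for a target, copy it to the
# repo root, then push. Paths and application names are assumptions.
set -eu

# pick_compose: print the compose file path for a target (screen|headless).
pick_compose() {
    case "$1" in
        screen)   echo "compose/docker-compose.screen.yml" ;;
        headless) echo "compose/docker-compose.headless.yml" ;;
        *)        echo "unknown target: $1" >&2; return 1 ;;
    esac
}

# Real usage (hypothetical application names):
#   cp "$(pick_compose screen)" docker-compose.yml
#   balena push my-screen-app
```

Keeping the copy in a script means the root docker-compose.yml is always whatever was deployed last, so it may be worth adding it to .gitignore.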
I have actually thought about trying to implement something like this with Travis CI, so that when I push to master it picks up the branch and deploys one compose file to one application and the other to the second. One reason why I like this approach versus the one @thundron mentioned, as you both have already covered, is that a service isn’t built and downloaded when it isn’t being used.
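A rough sketch of what that could look like in a .travis.yml, where the application names, compose file paths, and stage layout are all assumptions rather than anything balena prescribes:

```yaml
# Hypothetical Travis CI deploy stage: on pushes to master, swap in each
# compose variant and push it to its own balena application.
language: minimal
jobs:
  include:
    - stage: deploy
      if: branch = master AND type = push
      script:
        - cp compose/docker-compose.screen.yml docker-compose.yml
        - balena push my-screen-app        # hypothetical application name
        - cp compose/docker-compose.headless.yml docker-compose.yml
        - balena push my-headless-app      # hypothetical application name
```

This assumes balena-cli is installed and logged in (e.g. via an API token in a Travis environment variable) earlier in the build.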
I guess I just need to start setting up the integration, which is something I’ve thought about doing for a long time, just with a single application and a simple Dockerfile.