multicontainer python shared variable

I have a multicontainer application and I want to share a few variables between the running Python programs to trigger events. What is the best method for doing this? I have been looking at D-Bus, sockets, and multithreading. I am unsure which is best and which would be huge time sinks. Are there straightforward examples you can point me to?

Hey @mdcraver can you elaborate on your use case a bit more and give us a little more context? I’m mainly wondering what the frequency of change and required response time would be - for example how soon after the variable is set/changed does your event need to trigger? Are we talking seconds or milliseconds? What is the data that you’re looking to share?

I’m assuming here that they are going to be changing quite frequently and are not constant, and as such you’ve ruled out the use of balenaCloud environment and service variables.

One of the programs waits for the user to start a test, at which point the test control program is flagged to start running it. When the test completes, seconds to minutes later, the test control program flags the original program, which then generates the report.

I haven’t ruled out the use of balenaCloud environment and service variables, I wasn’t aware this was a use case for them.

@mdcraver thanks for the context! My colleague @samothx suggested using a shared data volume between containers and simply storing this flag in a file on disk, which I agree sounds like a possible and simple solution here. How does that sound?
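A minimal sketch of that flag-in-a-file idea, assuming both containers mount the same named volume. The path, state names, and function names below are all illustrative, not from this thread:

```python
from pathlib import Path

def set_state(flag_path, state):
    """Writer container: record the current test state (e.g. "running", "done")."""
    flag = Path(flag_path)
    tmp = flag.with_suffix(".tmp")
    tmp.write_text(state)
    # rename is atomic on POSIX, so the reader never sees a half-written file
    tmp.replace(flag)

def get_state(flag_path, default="idle"):
    """Reader container: poll the shared flag file."""
    flag = Path(flag_path)
    return flag.read_text() if flag.exists() else default
```

Each side would point `flag_path` at a file inside the shared volume mount (e.g. `/shared/test_state`) and the reader would poll it on whatever interval suits the required response time.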

That might work for this current issue, but as this project progresses I can imagine needing a solution that doesn’t require frequent disk access. It also seems a little heavy to use a shared volume just to pass a flag. Do you know of a solution for sharing variables that will scale better?

Also, I am currently planning to use the shared volume method to share the large amount of test data between containers.

Hi, did you evaluate the above suggestion of balenaCloud environment and service variables to see if they fit your needs? Alternatively, we’d probably recommend using one of the traditional protocols such as HTTP or gRPC. This has been discussed before, and there is an open issue about improving our documentation around container-to-container communication.

I looked at them, but aren’t they treated as constants? This needs to be a state flag that indicates if the test is running or completed and will change based on user input.

You can update them, e.g. via the API or SDK. While they likely could work, they may not be a good fit for your project without further investigation.

That is good to know about; I can imagine uses for this in the future. However, they seem not to be made for short-lived variables.

Can you point me to any potential solution for flag or small variable sharing between programs (Python in this case) that are each in different containers? I could use the “save to file” option mentioned above, but it seems a bit heavy for this case.

Unfortunately, I am not aware of any answer that would “just work”. TCP, Unix sockets, shared volumes, shared memory (SHM), and balenaCloud variables are all possibilities that you will have to evaluate according to your own needs.

Some multi-container projects (like Linux Foundation Edge) use a Consul-based container with which the other containers register. It is used for service discovery and central configuration. It is a more complex but more reliable solution.

Thanks for all the input.

I think I am going to try using a network connection. It looks like using networking in compose should be pretty straightforward. Are there any balena specific issues relating to using networking between containers that I should be aware of?
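For what it's worth, by default compose places all services on a single app-level network where each container is reachable by its service name, so very little networking configuration is needed. A minimal sketch (service names and the exposed port are placeholders, not from this thread):

```yaml
version: "2.1"
services:
  container_a:
    build: ./container_a
    expose:
      - "8000"   # example port the flag server would listen on
  container_b:
    build: ./container_b
    depends_on:
      - container_a
```

With this, `container_b` can open a TCP connection to the hostname `container_a` on port 8000.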

Hi, there is some more information in this Masterclass. The only balena-specific network “issues” would be the unsupported compose fields referenced here:

I am unsuccessfully trying to implement multicontainer networking with Python. Following are my testing files. Currently, I am getting a “broken pipe” exception. I am sure the problem is something simple I have missed, but I am not seeing it.


docker-compose.yml.pdf (16.8 KB)

Container_A-Dockerfile.template.pdf (18.3 KB)

Container_B-Dockerfile.template.pdf (18.3 KB)

Hi. Looks like there are multiple issues with the code you posted. In container_A it should be .connect() in place of .bind(). In container_B, the port it binds to is incorrect. The balena-socket feature is also unneeded in the compose file.
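To illustrate the bind/connect fix above, here is a minimal sketch of a flag exchange over a TCP socket. The hostname and port are assumptions (the attachments are not readable here), following the convention that the *server* binds and listens while the *client* connects:

```python
import socket

SERVER_HOST = "container_a"  # compose service name; engine DNS resolves it (assumption)
PORT = 8000                  # any free port both sides agree on (assumption)

def run_server(host="0.0.0.0", port=PORT):
    """Server side (container_A): bind, listen, and receive one flag."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))      # the server binds...
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            return conn.recv(1024).decode()

def send_flag(flag, host=SERVER_HOST, port=PORT):
    """Client side (container_B): connect (not bind!) and send the flag."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))   # ...and the client connects
        cli.sendall(flag.encode())
```

A “broken pipe” typically means one side wrote to a connection the other side had already closed, which mismatched bind/connect roles or a wrong port can easily cause.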

There are many ready-to-use libraries for what you are trying to achieve, though, which would probably be easier than rolling your own from low-level sockets.

What are some examples of the ready-to-use libraries I could look at? I don’t really want to roll my own low-level sockets, but I hadn’t found a solution during my searches.

Searching for RPC or IPC should give you some alternatives, for instance the json-rpc package. There were also others suggested in this thread.
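As one concrete illustration of the RPC approach, Python's standard-library `xmlrpc` modules (used here instead of the json-rpc package mentioned above) can expose a small state API over the compose network. Function names, the port, and the state values are all made up:

```python
from xmlrpc.server import SimpleXMLRPCServer
import threading

def serve_state(host="0.0.0.0", port=8001):
    """Run a tiny RPC server holding the shared test state in memory."""
    state = {"test": "idle"}
    server = SimpleXMLRPCServer((host, port), allow_none=True, logRequests=False)
    # update() returns None, so `or True` gives the RPC call a truthy return value
    server.register_function(lambda s: state.update(test=s) or True, "set_state")
    server.register_function(lambda: state["test"], "get_state")
    server.serve_forever()
```

Another container would then reach it by service name, e.g. `ServerProxy("http://<service-name>:8001").set_state("running")`, with no disk access involved.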

After looking through the alternatives I decided on Pyro. To start, I am just implementing the Pyro warehouse example.

The name server is successfully created:
04.02.20 15:32:10 (-0500) pyro-ns NS running on localhost:9090
04.02.20 15:32:10 (-0500) pyro-ns Warning: HMAC key not set. Anyone can connect to this server!
04.02.20 15:32:10 (-0500) pyro-ns URI = PYRO:Pyro.NameServer@localhost:9090

However, I am running into difficulty having the pyro NameServer be located by the other containers. Am I missing something in my compose file?

docker-compose.yml.pdf (16.7 KB)
pyro-ns_Dockerfile.template.pdf (18.0 KB)

Hi there.

I’m not familiar with Pyro (or even much Python, TBH); however, reading your configurations, all seems well. Not knowing the specifics of Pyro, you should not need a name server for your system, because the containers will be linked so that they’re accessible by service name. For example, from container_A you should be able to connect directly to container_B, and the container engine will handle the DNS resolution for you (e.g. ping container_B from a shell inside container_A).
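That service-name resolution can be sanity-checked from Python before wiring up Pyro. A small helper (the function name is made up; `container_B` is the service name from this thread):

```python
import socket

def resolve_service(name):
    """Return the IPv4 address the embedded DNS reports for a compose
    service name, or None if the name does not resolve."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# Inside container_A on the device, resolve_service("container_B") would be
# expected to return an address on the app's compose network.
```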

Here’s the documentation on networking for multicontainer applications. You may also want to take a look at the docker-compose networking reference for some additional tips.

Good luck!

Thanks for all your help. I managed to get Pyro4 to work and allow me to expose objects from other containers.

The following post contains the working solution.