It appears the only condoned method of sharing data between containers is named volumes, which persist unless a forced Purge Data is done.
Is there any way we can share data between various containers, so that we can serve them up (via Samba for instance) but also allow the data to get overwritten on the next update?
Hey @drewcovi, in the container world, I think shared volumes are the usual way to share data between containers/services.
On the other hand, the data doesn’t have to wait for a forced Purge Data: your own application can manage the lifecycle of the data shared on the volume (and that’s how containerized applications usually do things). For example, one container puts data into the shared volume, and when the other container has consumed that data, it removes it (or signals through other shared files to the first container that it can remove the data).
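A minimal sketch of that handoff, assuming the volume is mounted at a hypothetical `/shared` path (here it falls back to a temp dir so the sketch runs anywhere):

```shell
#!/bin/sh
# Sketch of the lifecycle described above; /shared is a hypothetical
# mount point for the named volume.
SHARED="${SHARED:-$(mktemp -d)}"

# Producer: write the data, then drop a sentinel file so the
# consumer knows the write is complete.
echo "payload" > "$SHARED/data.txt.tmp"
mv "$SHARED/data.txt.tmp" "$SHARED/data.txt"   # rename is atomic
touch "$SHARED/data.txt.ready"

# Consumer: only touch the data once the sentinel exists, then
# remove both files to hand the slot back to the producer.
if [ -f "$SHARED/data.txt.ready" ]; then
    cat "$SHARED/data.txt"        # the real work would happen here
    rm "$SHARED/data.txt" "$SHARED/data.txt.ready"
fi
```

The temp-file-plus-rename dance matters because the consumer could otherwise see a half-written file; the sentinel makes "done" explicit.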
@imrehg amazing… would this need to happen on startup?.. I’m attempting to see if a simple webserver directory can be changed, but overwritten on the next firmware update.
This would mean Samba could update a shared volume, but the Dockerfile could copy new files over it… Right now, even if I copy files over in my Dockerfile, they appear to be overwritten again once that shared volume is mounted…
So the only thing I can conclude is that I would need to copy the latest webserver files to a third location, and when I run my start script, overwrite the data in the shared volume…
@drewcovi I think the logic has to be in your start scripts. I don’t think you can put things into shared volumes in the Docker build steps (which is what would be needed if the Dockerfile were to populate the folder); volumes are only mounted at runtime, so anything copied to that path at build time is shadowed by the mount.
It would be something like:
- in the build steps, place the files somewhere inside the image
- on start, the start script copies over the files that are needed, possibly replacing existing ones (from earlier updates), and maybe places other files that signal to the other container (the one the volume is shared with) that the volume can be used
- the other container reacts to this change, for example by monitoring the files in that volume, or by having an API that the first container can trigger when the task above is done, and acts accordingly.
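The steps above could be sketched like this; every path and command name here is made up for illustration:

```shell
#!/bin/sh
# In the Dockerfile the build step would put the latest files inside
# the image, outside the volume, e.g.:
#
#   COPY ./webroot /opt/webroot-latest
#
# and the start script would sync them into the shared volume:

deploy_webroot() {
    src=$1    # e.g. /opt/webroot-latest, baked in at build time
    dest=$2   # e.g. /shared/webroot, the named volume mount
    mkdir -p "$dest"
    # Overwrite whatever is in the volume (Samba edits, files from
    # earlier updates) with the files shipped in this image.
    cp -a "$src/." "$dest/"
    # Signal the other container that the fresh files are in place.
    touch "$dest/.updated"
}

# In the real start script you would then run something like:
#   deploy_webroot /opt/webroot-latest /shared/webroot
#   exec my-webserver --root /shared/webroot   # hypothetical server
```

This way each firmware update wins over whatever was edited in the volume since the last deploy, which is the overwrite-on-update behaviour asked about above.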
This idea comes from other use cases where containers need to communicate, in general in the Docker / microservices world. I know yours is not the same, but imagine the following:
- one container/service has the task of receiving video files
- another container does e.g. subtitling based on the video
- the first container places the video files in the shared volume when it has finished receiving them
- the second container notices a new file, runs its task on it, sends the result on somewhere, and removes the file from the shared volume when the task is done.
Not saying that your setup follows the same producer-consumer pattern, but the underlying method of communicating is the same, so maybe the example is of some use.
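For what it’s worth, the consumer side of that video example is often just a small watch loop. A sketch, where the paths and the `process_video` stub are invented for illustration:

```shell
#!/bin/sh
# Consumer-side sketch for the video example above.

process_video() {
    # Stand-in for the real task (subtitling, sending the result on).
    echo "processing $1"
}

consume_once() {
    dir=$1   # e.g. /shared/incoming, the shared volume
    for f in "$dir"/*.mp4; do
        [ -e "$f" ] || continue   # glob matched nothing
        process_video "$f"
        rm -- "$f"                # task done: remove from the volume
    done
}

# The container's start script would poll in a loop, e.g.:
#   while true; do consume_once /shared/incoming; sleep 5; done
```

Instead of polling, `inotifywait` from inotify-tools can block until a new file appears, which reacts faster and uses less CPU.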