@drewcovi I think the logic has to be in your start scripts. I don't think you can write into shared volumes during the docker build steps (which is what you'd need if the Dockerfile were to populate the folder).
It would be something like:
- in the build steps, place the files somewhere inside the image
- on start, the start script copies over the files that are needed, possibly replacing existing ones (from earlier updates), and maybe drops a marker file to signal to the other container sharing the volume that it can be used
- the other container reacts to that change and acts accordingly, for example by monitoring the files in that volume, or by exposing an API that the first container can trigger once the task above is done (see the entrypoint sketch after this list)
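To make the second step concrete, here is a minimal entrypoint sketch. The paths `/opt/seed` and `/shared`, the marker file name `.populated`, and the assumption that the Dockerfile has already `COPY`ed the files into the image are illustrative placeholders, not something taken from your setup:

```sh
#!/bin/sh
# entrypoint.sh -- minimal sketch; all paths are examples, adjust to your setup.
set -e

SEED_DIR=/opt/seed     # files baked into the image at build time, e.g. COPY ./assets /opt/seed
SHARED_DIR=/shared     # the shared volume, mounted into both containers

# copy the files over, replacing anything left behind by earlier image versions
cp -r "$SEED_DIR/." "$SHARED_DIR/"

# drop a marker file so the other container can tell the volume is populated
touch "$SHARED_DIR/.populated"

# hand off to the container's real main process
exec "$@"
```

The other container could then simply wait for `.populated` to appear before it starts using the volume.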
This idea comes from other use cases where containers need to communicate, which is common in the docker / microservices world. I know yours is not the same, but imagine the following:
- one container/service has a task to receive video files
- another container does, e.g., subtitling based on the video
- the first container can place the video files in a shared volume when finished receiving
- the second container notices the new file, runs its task on it, sends the result on somewhere, and removes the file from the shared volume once the task is done (see the watcher sketch after this list).
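A rough sketch of what the second container's side could look like, using a simple polling loop; the directory, the `*.mp4` pattern, and the `generate_subtitles` command are all hypothetical stand-ins (an inotify-based watcher would work just as well):

```sh
#!/bin/sh
# watcher.sh -- sketch of the consumer container; names are illustrative only.
SHARED_DIR=/shared/incoming

while true; do
  for f in "$SHARED_DIR"/*.mp4; do
    [ -e "$f" ] || continue        # the glob matched nothing, keep waiting
    echo "processing $f"
    generate_subtitles "$f"        # hypothetical task command, replace with your own
    rm -- "$f"                     # remove the file from the shared volume: task done
  done
  sleep 5                          # plain polling; inotifywait would avoid the delay
done
```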
I'm not saying your setup works the same producer-consumer way, but the underlying method of communication is the same, so maybe the example is of some use.