Caching a container

I have an RPi container which runs a Python app. This app requires many dependencies, most of which do not change.

Currently it is tedious to make a change to the Python code and then have to reinstall all the deps every time.
On amd64 platforms it is a simple matter to make a container image with the deps pre-installed, push that to a private repo, and then use FROM to pull that image instead.

Is there any facility on the platform whereby I could push such a pre-built container image?

Hey @phewlett, just moved this to the Troubleshooting room to make sure our engineers see it and can help out!

Hi @phewlett,

There are a couple of things you could do. One would be to create a base image with all of your dependencies installed and use that in a FROM in your Dockerfile. Right now you’d need to put that base image on a public registry, as our builders would need to be able to access it directly, but we’re working on adding build-time secrets support to the platform (which would enable private registries).

That said, that might be overkill if what you’re looking for is to speed up builds. We use Docker layer caching on the build server, so as long as your dependencies are installed earlier in your Dockerfile than the lines you are changing, they will be pulled from cache and not re-run.

So you could do this:

FROM resin/rpi-raspbian

RUN apt-get update && apt-get install -y package1 package2 package3

WORKDIR /usr/src/app
COPY ./ .

# Replace with your app's actual start command, e.g.:
CMD ["python3", "app.py"]

As long as the changes you are making happen in your application code, everything up through the “WORKDIR” line in the Dockerfile will come from cache after the first build. So the first build might take 15 minutes to install your dependencies, but subsequent builds would take only a few seconds to load from cache.
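Since the slow step in this thread is pip, a cache-friendly variant of the above copies only the requirements file before the rest of the source, so the pip layer is also cached until the requirements themselves change (file and package names below are illustrative placeholders, not from the original thread):

```dockerfile
FROM resin/rpi-raspbian

RUN apt-get update && apt-get install -y python3 python3-pip

WORKDIR /usr/src/app

# Copy only the requirements first: this layer (and the pip install
# below it) stays cached until requirements.txt itself changes.
COPY requirements.txt ./
RUN pip3 install -r requirements.txt

# App code changes only invalidate the layers from here down.
COPY ./ .

CMD ["python3", "app.py"]
```

With this layout, editing the Python source re-runs only the final COPY and CMD layers; the apt and pip layers come straight from cache.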

Thanks for the reply. Unfortunately the dependencies include python-grpcio, installed with pip3, which requires a compilation step involving gcc.
The build has no way to know this has already been done, so it recompiles every time, which takes a while.

If this were amd64, I would simply pre-build a Docker image, push it to my private Docker repo, and use FROM to get that image instead of the base image.

However I do not have an ARMv7 dev environment to do this except the one on the platform.

It would be neat if I could generate this pre-built image and push it to the Docker repo on the platform.

You can use the resin build command of the resin-cli to generate an image for the RPi, then use it as a base image by importing it normally with FROM.

Let me point you to a related forum thread:

Instead of pushing to Docker Hub, can I push to my private Docker repo?

I’m afraid that our builder can’t pull from private repositories at the moment, since it would somehow have to be authenticated against your private repository.

As discussed in this forum thread:

a workaround would be to use resin build and resin deploy locally in order to use your private images.
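A rough sketch of that workaround (the application name and image reference below are placeholders, and the exact flags vary between resin-cli versions, so check resin help build and resin help deploy):

```shell
# Build the image locally for the Pi's architecture instead of on
# the remote builders:
resin build --application myApp

# Push a pre-built image straight to the application, bypassing the
# remote builder and its public-registry restriction:
resin deploy myApp myregistry.example.com/my-base:latest
```

Because the build happens on your machine, pip's cache and any private registries you are logged into are available, unlike on the shared builders.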

On the other hand, if the compilation step is independent of the previous steps, you could build a minimal public base image for that and then use multi-stage builds: have an extra FROM referencing that base image and only copy the built artifacts. Let me point you to the respective documentation page:
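As a sketch of the multi-stage idea (image names and the target path are placeholder assumptions, and the dist-packages path may need adjusting for the target Python version): the first stage compiles grpcio once, and the final stage copies only the installed packages, so gcc and python3-dev never end up in the runtime image.

```dockerfile
# Stage 1: builder image with the full compiler toolchain.
FROM resin/rpi-raspbian AS builder
RUN apt-get update && apt-get install -y python3-pip python3-dev gcc
# Install (and compile) the dependency into a standalone directory.
RUN pip3 install --target=/deps grpcio

# Stage 2: slim runtime image -- no gcc, no python3-dev.
FROM resin/rpi-raspbian
RUN apt-get update && apt-get install -y python3
# Copy only the built artifacts; adjust the path to your Python's
# site/dist-packages directory.
COPY --from=builder /deps /usr/lib/python3/dist-packages

WORKDIR /usr/src/app
COPY ./ .
CMD ["python3", "app.py"]
```

Note that multi-stage builds require a Docker version of 17.05 or later on whatever does the building.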

As an additional alternative to work around the private-repo issue, you should be able to use a “FROM $UUID” reference so that our builders can pull your image without it being publicly browsable (you would have to disable indexing of your registry).
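For example (the registry host is a placeholder, and the digest is left unspecified), pulling an image by an exact content reference rather than a browsable tag looks like:

```dockerfile
# Pull by immutable digest; the image need not appear in any
# registry listing for this to resolve.
FROM myregistry.example.com/my-base@sha256:<digest>
```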

Thank you everyone. I will probably use multi-stage builds…

Let us know whether and how it worked for you.

Not sure if this would help, but Raspberry Pi Python wheels are now available; however, they are only enabled by default from Raspbian Stretch onwards.

So rather than having to compile the package, pip can just download the pre-compiled wheel.
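For instance, on images older than Stretch (where the index is not preconfigured in /etc/pip.conf) you can point pip at the piwheels index explicitly; the index URL is piwheels' published one, and grpcio is the package from this thread:

```shell
# Fetch a pre-compiled ARM wheel from piwheels instead of building
# from source (falls back to PyPI for anything piwheels lacks):
pip3 install --extra-index-url https://www.piwheels.org/simple grpcio
```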

Check out this link:

No… I am installing via pip already. Unfortunately packages with C extensions, such as python-grpcio, still require a compiler (gcc) and the Python headers (python-dev) to compile the code on installation, which is painful.