Raspberry Pi camera on Alpine 8-slim


We’re having a hard time getting the Raspberry Pi Camera to work on alpine-node:8-slim.

After much trial and error, we settled on a rather cumbersome strategy that involves backing up two libraries at /opt/vc/lib/, running an rpi-update script during the Docker build, and creating a number of symlinks from /opt/vc/lib to /usr/local/lib.
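For reference, the symlink step looks roughly like the sketch below (the directories are the real ones involved, but the helper name and its argument handling are illustrative, not our actual Dockerfile):

```shell
#!/bin/sh
# Illustrative sketch of the symlink step described above: link every
# shared library from the VideoCore directory (/opt/vc/lib) into a
# directory the dynamic linker already searches (e.g. /usr/local/lib).
# The function name is made up for this example.
link_vc_libs() {
    src="$1"   # e.g. /opt/vc/lib
    dst="$2"   # e.g. /usr/local/lib
    mkdir -p "$dst"
    for lib in "$src"/*.so*; do
        [ -e "$lib" ] || continue   # skip if the glob matched nothing
        ln -sf "$lib" "$dst/$(basename "$lib")"
    done
}
```

In the Dockerfile this would run as a RUN step after rpi-update has populated /opt/vc/lib.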

This was working well until last Friday, when a small modification to our Dockerfile forced a rebuild. After this, every attempt to use the camera resulted in the well-known error message below:

bash-4.4# raspistill -o cam.jpg
mmal: mmal_component_create_core: could not find component 'vc.camera_info'
mmal: Failed to create camera_info component
mmal: mmal_component_create_core: could not find component 'vc.ril.camera'
mmal: Failed to create camera component
mmal: main: Failed to create camera component
mmal: Failed to run camera app. Please check for firmware updates

This problem presented on both a Raspberry Pi 3 and a Raspberry Pi Zero W.
Can you please point us to the best way to build an alpine-node:8-slim container that interfaces well with the RPi Camera?


@ymaia do you mind sharing your Dockerfile, or at least the FROM line? I wonder if you are tracking the latest base image and something changed in that image between when you initially built it and the rebuild.


I’ll PM you the Dockerfile later today.

But the requested line reads:
FROM resin/%%RESIN_MACHINE_NAME%%-alpine-node:8-slim

Yes, I believe there was a change in the base image (our last build was over 45 days ago).
Previous builds were always consistent regarding the camera.

Can we pin the base image to a more specific version than what is currently specified?
Alternatively, do you have an alpine-node:8-slim Dockerfile in which the camera works?


@mccollam, did this guy ever send over his Dockerfile?

ok sweet, thx. let’s just wait for him to come back.


Were you able to take a look at our Dockerfile?
We already have people pressing us about the camera, which is still not working (despite our attempts to diagnose and fix it).


Our internal notes on this conversation say that we never received the Dockerfile and that we are waiting for you to send it over. I’ll reach out to you via our PM address, and then attach it privately to this thread.

Have reached out to the email address listed in your profile.

Just replied to that e-mail. Thanks!

I have received that Dockerfile. We’ll let you know here of anything we find.

Thanks for sending over your Dockerfile. Generally it looks good so I don’t see anything immediate that I would expect to cause issues. My current theory is that it is indeed something to do with a base image update causing issues. Since you’re starting with

FROM resin/%%RESIN_MACHINE_NAME%%-alpine-node:8-slim

this means that you’re always grabbing the latest available Alpine Node image every time you build.

Do you mind trying changing that out for a specific tagged build from before you were seeing this issue? E.g.

FROM resin/%%RESIN_MACHINE_NAME%%-alpine-node:8.9.3-slim-20180111

That should grab a build from January. (You could also try 8.9.0-slim-20171123 from last November if that’s more in the timeframe that you know was working.)

It’s generally best practice to use a tagged build regardless to head off these sorts of changes – that way you always know that your base that you are starting from has not changed in any way when you are doing a new build.

Hi Ronald,

Just tried with “8.9.3-slim-20180111”, but that did not work (the build failed while removing a file from /boot, which seems to be part of the problem, since this started right when the camera broke).

Then I tried 8.9.0-slim-20171123. This one failed right from the get-go with:

[Error] Could not fetch base image: resin/raspberry-pi-alpine-node:8-slim:8.9.0-slim-20171123

From what I could determine, we had a successful build on Dec 31st 2017.
Therefore, can you double-check the build version that was current on that date (for both “raspberry” and “raspberrypi3”) and provide us with the correct build string? I was not able to find this information on Docker Hub (builds from 2018 only).

One more thing: so I can rule out caching issues, can you tell us if there is a way to force the build process not to use the cache?

Thank you,

Hi there,
so regarding the specific image that you are looking for I will have to ask the team to provide any info they have.

Regarding the cache question, I can suggest two ways:

(1) You can add an ENV instruction at the point in the Dockerfile after which you want to bust the cache, and change its value to trigger a rebuild of all the layers that follow. E.g.

# Change the value of this env var to rerun the following commands without cache
ENV CACHE_BUST 1

# apt-get install...
# COPY ...
# CMD ...
# RUN ...

(2) A rebuild can also be forced by pushing to the resin-nocache branch, i.e.
git push resin master:resin-nocache


Ok Ilias. We’re waiting on the image name so we can restore that functionality.


@ymaia You could try 8.9.0-slim-20171207 which was before your 31 December build.

If you’d like to grab a list of all the tags for this build (in case you want to try other tags or other versions) you can do something like the following:

wget -q https://registry.hub.docker.com/v1/repositories/resin/raspberry-pi-alpine-node/tags -O -  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}'

This will give you a (long!) list of all the tags for the Alpine node builds.
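In case the v1 endpoint is ever retired, the newer Docker Hub v2 API exposes the same tag list as paginated JSON. A rough equivalent is sketched below, with the field extraction done by grep/cut instead of sed/awk (the endpoint shape and page_size parameter are Docker Hub’s; the helper name is made up for this example):

```shell
#!/bin/sh
# Print one tag name per line from Docker Hub v2 tag-list JSON on stdin.
# A v2 payload looks like: {"count":N,"results":[{"name":"8-slim",...},...]}
extract_tag_names() {
    grep -o '"name": *"[^"]*"' | cut -d'"' -f4
}

# Example (requires network); the v2 API paginates, so follow the "next"
# field in the response if you need more than one page of tags:
# wget -qO- 'https://registry.hub.docker.com/v2/repositories/resin/raspberry-pi-alpine-node/tags/?page_size=100' | extract_tag_names
```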

Hi Ronald,

We were able to get this sorted.
That wget command allowed us to choose a working build image (it should be posted to the documentation somewhere).
We then advanced incrementally until we found what broke our strategy. Once that was resolved, we chose a more recent base image (which is now pinned, so we can avoid further surprises).


Excellent, I’m glad to hear everything is working!

For reference, the wget command I posted is just scraping Dockerhub for tags – this isn’t something specific to resin.io but is a more general Docker registry concept. So you should be able to use any other tools, scripts, etc. that apply to Docker registries here as well.