Encryption, access to Docker container

I have a general question about the “hackability” of a resin image. I’m basically using the Dockerfile to run my nodejs script on the raspberrypi-node image. My question is how easily can people find the code I push on the actual filesystem? Was Resin.io built on the premise that if someone “finds” the SD-Card, they can’t easily retrieve the source code, or was that “security” never the idea?

This is a very good question. The answer is that it depends on the platform: specifically, on whether the hardware you are running on can securely store encryption credentials and provides the hardware primitives needed for a secure boot implementation.

On the Raspberry Pi this is not possible because it lacks such hardware. Anyone who gets hold of the SD card of a Raspberry Pi will be able to grab the file contents of the operating system and of your nodejs container.

To see why this is true, let’s assume that the operating system utilised full disk encryption to protect the contents of the SD card, and that the device then reboots or power cycles. In order to run your code again, the system needs to be able to boot and load your nodejs application into memory. This means that somewhere along the boot process the Raspberry Pi has to get access to the decryption key in order to read the files. But the only place to store the decryption key is the SD card itself! So anyone who gets their hands on the SD card can follow the same decryption process and read all the files.

On other, more advanced, hardware platforms you can do a lot more. While I can’t give any timelines at this point, I can say that we definitely want to support platforms that feature a TPM chip and allow for a secure boot implementation. In this scenario the decryption key will only be given to the CPU if the platform booted and ran a signed version of the software.

That said, if your threat model involves an adversary with unconstrained physical access to your device, then there are more attack vectors to account for, and these need to be considered on a case-by-case basis.

Hi Petro, great. That’s very helpful. Thank you.

Hi Petro, I might have a useful idea. It would be nice if you could somehow package the nodejs application into a single executable. That way, I would push my code to git, the resin.io server would create an executable (or something packaged so that the user can’t access the original code files), and that would then be used by the Dockerfile. Something like this: https://github.com/jaredallard/nexe

This would prevent users from being able to see the full source code.

This is a very good idea, and it would be especially useful for C projects, or compiled projects in general. We will have to do quite a bit of work to make this possible in our build pipeline. Currently all builds resolve into a Dockerfile being built, which keeps things simple.

We’re closely following docker for the implementation of the required features. You can follow the discussion here: https://github.com/docker/docker/issues/14080

Hi Petro, I actually worked out an alternative. I used jx (http://jxcore.com/), so in my Dockerfile I compile my nodejs project into one jx executable, then delete all my scripts and run the executable. It then becomes quite difficult for someone to read through the server-side source code.

Hi Robin. Unfortunately your alternative won’t work due to the way docker works.

I assume that somewhere in your Dockerfile you have a COPY . /usr/src/foobar which transfers the source code you want to protect into your Docker image. After this point, anything you do in subsequent RUN commands won’t affect the layer created by that COPY. So even though the latest layer of your image will have your source code marked as deleted, someone can easily inspect just the layer that did the COPY and get the source code.

Try running docker history on some image :smile:
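To make the problem concrete, here is a rough sketch of the pattern that does not work (the base image and paths are placeholders, and the jx invocation is from memory, so treat it as illustrative only):

```dockerfile
FROM resin/raspberrypi-node

# This COPY creates a layer that permanently contains the sources.
COPY . /usr/src/app
WORKDIR /usr/src/app

# Compiling and deleting in LATER layers does not help: the COPY
# layer above still holds the original files and can be extracted.
RUN jx package index.js app
RUN rm -rf ./*.js

CMD ["jx", "app.jx"]
```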

The only way to do what you need is to fetch your code from some external source instead of pushing it and COPYing it in. It is important that the fetch, compile/obfuscate, and delete-sources steps are all done in the same RUN command, to avoid creating extra layers that leak your files. A bit ugly, but it will work.
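A minimal sketch of that approach, assuming a hypothetical repo URL and the pkg packager (none of these names come from the thread, so adapt to your project):

```dockerfile
FROM resin/raspberrypi-node

# Fetch, compile, and delete the sources in a SINGLE RUN, so no layer
# of the final image ever contains the unprotected code.
RUN git clone https://example.com/me/secret-app.git /tmp/src && \
    cd /tmp/src && \
    npm install && \
    npm install -g pkg && \
    pkg index.js --output /usr/src/app/server && \
    rm -rf /tmp/src

CMD ["/usr/src/app/server"]
```

One caveat: the RUN command line itself is recorded in docker history, so any credentials embedded in the clone URL would still leak; fetching with a short-lived deploy key is safer.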

Hi Petro, good point. I forgot about the Docker layers, of course. I have now made two apps: a DEV one, where I compile everything on a Raspberry Pi and at the end upload the binary to the cloud, and a LIVE one. Once I’ve tested the code locally, I simply change the URL in an environment variable, which forces the LIVE resin app to pull the latest image and restart the application.

Hi Petro,
I ran across a problem where I need to store credentials, certs, and keys a little more securely than just baking them into a Docker image. I ended up going with Vault (https://vaultproject.io) and using an already-existing ramdisk for storing the credentials. This made it a little more secure in that it would have to be hacked without removing power from the unit. That ramdisk is likely too small for storing a codebase, though. For your use case, you may be able to use luks to create an encrypted volume in a loop file (randomly generate the key on container start, and don’t store it to disk) and use Vault (or an environment variable) to distribute a git deploy key for your code. Drop the code into the luks volume and run from there. If power is removed, the key for unlocking the code doesn’t exist, and you don’t have to fool around with a body of code sitting on a ramdisk.
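If it helps, here is a rough, untested sketch of that idea, assuming a privileged container with cryptsetup available (the base image, $PRIVATE_REPO_URL, and all paths are made up for illustration):

```dockerfile
FROM resin/raspberrypi3-node

RUN apt-get update && \
    apt-get install -y cryptsetup git && \
    rm -rf /var/lib/apt/lists/*

# On container start: key a loop-backed luks volume with a random,
# never-persisted passphrase, pull the code with a deploy key obtained
# out-of-band (e.g. from Vault), and run it from the encrypted mount.
# If power is cut, the passphrase is gone and the volume is unreadable.
CMD key="$(head -c 32 /dev/urandom | base64)" && \
    dd if=/dev/zero of=/tmp/code.img bs=1M count=64 && \
    echo -n "$key" | cryptsetup -q luksFormat /tmp/code.img - && \
    echo -n "$key" | cryptsetup luksOpen --key-file - /tmp/code.img code && \
    unset key && \
    mkfs.ext4 /dev/mapper/code && \
    mkdir -p /mnt/code && \
    mount /dev/mapper/code /mnt/code && \
    git clone "$PRIVATE_REPO_URL" /mnt/code/app && \
    node /mnt/code/app/index.js
```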

Not perfect, but harder to pop.

Hey @ashmastaflash, sounds interesting. To poke at it a bit more: with these setups, the question I always have is how to do the first authentication in the chain. For example, in your Vault setup, how did the device get access to the stored secrets? Have any example project we can check out? Cheers! :slight_smile:

Hey @imrehg, here’s some example code:

This is the function that grabs the secrets from the vault: sensor/config_helper.py at master · sitch-io/sensor · GitHub

The keys in the vault are all self-signed, and the tool that generates them and uploads them into the vault is here:
GitHub - sitch-io/sitch_self_signed_seeder: Creates self-signed certs and uploads into Vault

The ConfigHelper class (referenced above) gets the Vault URL and access token from the Resin project’s environment variables. The runner.py file (sensor/runner.py at master · sitch-io/sensor · GitHub) uses the information in the ConfigHelper class to determine where to drop the keys. In this case, they’re hardcoded to /run/dbus/crypto (sensor/config_helper.py at master · sitch-io/sensor · GitHub).

TL;DR: Environment variables are used to deliver the URL, secret path, and vault token. The application uses these things to retrieve the crypto bits and deposit them into a part of the file system that exists only in RAM.

Hi Petro,

I’m trying to understand what steps someone would have to take to get at the contents of my code. You said to run the docker history command on some image, but where do I find that image? I took the SD card out of the Raspberry Pi 3 and put it in a laptop running Linux, and the system automatically mounted four partitions:

  • resin-boot
  • resin-rootA
  • resin-state
  • resin-data

I just found a docker folder inside resin-data that I could not open (it was protected somehow). Also, there was nothing inside /usr/src (is the app folder built at runtime?).

In the end I guess I will package the node app using pkg and delete the sources locally (as RobinReumers pointed out), and then copy the binary to the device in a Docker RUN command.

Thanks a lot in advance!

Hey @arthurmoises, I think when @petrosagg said to “run docker history on some image”, he didn’t mean a specific image, but rather “look at how much history there is in any random docker image you might have”.

Looking at the application container creation pattern you mention, I’m guessing you could make good use of Multi-Stage Builds: there you can package your app in a build image and copy only the things you need into the runtime image. I just double-checked that the runtime image’s docker history does not contain anything from the build image (as it should be; just good to check :slight_smile:).
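For example, something along these lines (untested; the image name and paths are only illustrative):

```dockerfile
# Build stage: the sources and the toolchain live only in this stage.
FROM resin/raspberrypi3-node AS build
WORKDIR /usr/src/app
COPY . .
RUN npm install && \
    npm install -g pkg && \
    pkg index.js --output /server

# Runtime stage: starts from a clean base image, so its history
# contains no layers from the build stage -- only what you COPY across.
FROM resin/raspberrypi3-node
COPY --from=build /server /usr/src/app/server
CMD ["/usr/src/app/server"]
```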

As for checking the image: I bet you could point a local docker daemon instance at the SD card’s docker storage, and then read the history/content of the images there. If someone has an SD card image, though, it would probably be much easier to just run it on a Pi3 and log into it to use the system’s own docker (as if it were a dev image, e.g. by trivially adding their own ssh key to the SD card before starting it up).

Just some thoughts…

Hi @imrehg! Thanks for the reply!

I’ll try what you said. Since that day I have managed to read the contents of the docker folder using root. I’ll check out those multi-stage builds; they will probably solve my problem!

Thanks a lot!

Hello, a pythonista here.

Would the following strategy be safe?

Inside the Dockerfile, I would:

  • COPY the code
  • build it using some build tool, like Cython
  • rm -r the code

Using this strategy, will the code ever touch the device? (Since I remove the code in the Dockerfile and keep only the generated binary.)

This would also solve the headache of building an ARM binary on my x86 machine.

Hello,

If you are copying the code and then compiling it, you should probably take a look at Docker multi-stage builds. In that case, you just have to handle your dependencies properly.

Be aware that each command in your Dockerfile creates a layer. Therefore, if you copy the files in one command and remove them in another, the code is still available in an intermediate layer even if it is not present in the final image.
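A rough multi-stage sketch for the Cython case (the setup.py invocation, module names, and entry point are placeholders, so adapt them to your project):

```dockerfile
# Build stage: the .py/.pyx sources never leave this stage.
FROM python:3 AS build
WORKDIR /src
COPY . .
RUN pip install cython && \
    python setup.py build_ext --inplace && \
    rm -f ./*.py ./*.pyx ./*.c

# Runtime stage: only the compiled extension modules are copied in.
FROM python:3
WORKDIR /app
COPY --from=build /src /app
CMD ["python", "-c", "import app; app.main()"]
```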

Cheers

@RobinReumers did you ever find a solution to this? I’m looking to do a similar setup.

Hi,

For passing sensitive data, Docker secrets offer several options, essentially leaving the sensitive data behind without exposing it in your containers. This also works in single- or multi-stage builds.
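For example, with BuildKit’s secret mounts the secret is available during a single RUN but never written into a layer (a hedged sketch; the secret id, repo URL, and key path are placeholders):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:alpine

RUN apk add --no-cache git openssh-client

# The deploy key is mounted only for the duration of this RUN and is
# not stored in any image layer or shown in docker history.
RUN --mount=type=secret,id=deploy_key \
    GIT_SSH_COMMAND="ssh -i /run/secrets/deploy_key -o StrictHostKeyChecking=no" \
    git clone git@example.com:me/secret-app.git /usr/src/app
```

You would then build with something like docker build --secret id=deploy_key,src=./deploy_key . after enabling BuildKit.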

John

@jtonello I don’t see what that has to do with securing files/code on the file system?

Hi,

I was referencing an earlier mention of certain container content, not all code. Sorry for any confusion.

John