Grabbing private code in Dockerfiles

I’m fairly new to Resin, so apologies if this is a solved problem I just haven’t found the answer to.

I have a directory structure like the following:

├── private_repo_1
├── private_repo_2
├── private_repo_n
└── private_installation_repo
    ├── .dockerignore
    └── Dockerfile

(The project is fairly large, so we want to keep the docker-related code tucked away in a separate repository.)

My Dockerfile needs access to the source code in the private repositories, but this seems to be a tricky problem. For one, I can't simply RUN git clone <repo> because of the security issues around baking SSH keys into the image. Similarly, I can't copy code from my development machine (e.g. COPY ../../project_root project_root) because Docker restricts COPY to the build context, so relative paths outside it are rejected.

So, I’m trying to figure out a solution more elegant than some of the hacky solutions I’ve found. Does anyone know of good examples to follow?

I'm not familiar with Moby, so I don't fully understand the bugs you've linked there. To build this image though, you're going to need to give the build server access to check out code from those repos during the build process somehow, which isn't trivial. I'm afraid I don't have any easy examples: for most cases it's much easier to push a complete single repo to Resin directly.

We don't yet have a fantastic solution to this. Right now the best option is to commit an SSH key that has access to clone the private repos (private_repo_1, private_repo_2, …) inside the Dockerfile repo (private_installation_repo), so that the Dockerfile build can access them to clone the rest of the code.

Of course this alone isn't fantastic for security, and you should be very careful if you need to do this. You can reduce the risks significantly though by using deploy keys (assuming your repos are on GitHub). These let you generate an SSH key with minimal privileges that can read a single repo and no more, limiting the risk if it's ever exposed.
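
Sketched out, that approach might look something like this (a hypothetical Dockerfile fragment, not a drop-in recipe: the base image, org and repo names are placeholders, and deploy_key stands for the read-only deploy key committed alongside the Dockerfile):

```dockerfile
FROM debian:stretch
RUN apt-get update && apt-get install -y git openssh-client

# Install the committed deploy key and trust GitHub's host key
RUN mkdir -p /root/.ssh
COPY deploy_key /root/.ssh/id_rsa
RUN chmod 400 /root/.ssh/id_rsa && \
    ssh-keyscan -t rsa github.com >> /root/.ssh/known_hosts

# Clone the private repos during the build
RUN git clone git@github.com:your-org/private_repo_1.git /usr/src/private_repo_1 && \
    git clone git@github.com:your-org/private_repo_2.git /usr/src/private_repo_2
```

Bear in mind the key still ends up in the image's layers with a setup like this, which is part of why minimal-scope deploy keys matter here.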

If you want to control access further, you can look at tools like Vault, which let you run your own hosted service that controls access to these keys; you then commit a single key for that service into your repo, which you can revoke, manage and audit independently of the SSH keys themselves. That's quite a bit more involved though, so it depends on your needs and your security concerns here.

Thanks for the help, Tim!

I think the deploy keys approach will work just fine for now. I’ll take a look into Vault as well - that might very well be our long-term solution.

In playing around a little bit, I’m realizing it’ll be easier to have deployment pipelines for both development and production. I think the Vault solution you provided will work for production, but I’ve run into a snag with the development pipeline.

For my local development Dockerfile, it makes more sense for me to ADD project_root from local code than to RUN git clone for each private repo, but I need to put project_root in the Dockerfile’s context somehow (a hacky solution would be to mv my Dockerfile to the project root, then resin local push, then mv it back :dizzy_face:).

Ideally, I’d want to call resin local push from the directory containing project_root, while specifying the path to Dockerfile (as in this example). This would rsync my project_root over to my device, then call something like docker build -f private_installation_repo/Dockerfile . on the device itself.

Is anything like that currently possible or in the works?

@nckswt I don't see any easy way of achieving that with resin for now, but this is interesting feedback, thanks! Looping in @kostas and @hedley, maybe they have better ideas than me :slight_smile:

You might be able to do this by symlinking the individual private repos (private_repo_n) into private_installation_repo. You'd essentially be creating a new root that's exactly the same but does have the Dockerfile at the top level, without having to actually move anything around. I assume resin local push handles symlinks correctly, but I must admit I've never actually tested this. Definitely sounds like a bug if it doesn't though :smile:.

That's a bit of a hassle to manage, but you could easily write a dev-environment script to set it up, and potentially run it from a git hook to make that even easier.

That’s not a bad idea - I’ll try a few things out and report back! Thanks :smile:


Just to follow-up: we’ve decided to unify our private repos into one repo to simplify our operations (cloning in the Dockerfile, managing PRs, setting up Jenkins, etc.).

Though, if we ever do have a repo outside of this unified repo, we’ll use a GitHub machine user with personal access tokens for authentication. Some of our (non-Resin-related) install code didn’t play nicely with using ssh-config to manage deploy keys :unamused:
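
For reference, the machine-user approach can be wired up over HTTPS roughly like this (a hypothetical sketch: the user name, org, and token variable are placeholders):

```shell
# Rewrite SSH GitHub remotes to token-authenticated HTTPS, so install
# scripts that hard-code git@github.com URLs work without ssh-config.
# "machine-user" and GITHUB_TOKEN are placeholders for your own values.
git config --global \
    "url.https://machine-user:${GITHUB_TOKEN}@github.com/.insteadOf" \
    "git@github.com:"

# After that, cloning an SSH remote transparently uses the token:
#   git clone git@github.com:your-org/private_repo_1.git
```

The insteadOf rewrite is handy precisely because it doesn't require changing any of the hard-coded SSH remotes in third-party install code.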

Thanks for the help, Tim and Carlo!

@nckswt, glad to hear you have a solution (plus an ace up your sleeve for future situations :slight_smile: )

@pimterry: I have a similar issue. We need to fetch code from our private Bitbucket server (self-hosted).
I played around, added a private id_rsa key via the Dockerfile, and added the public key to the repository.
But I can't seem to get this to work.
I have the following Dockerfile:

RUN mkdir -p /root/.ssh/
COPY id_rsa_resin /root/.ssh/id_rsa
COPY config /root/.ssh/config
RUN chmod 400 /root/.ssh/id_rsa
ENV GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"

RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan -t rsa our.bitbucker.server.url >> /root/.ssh/known_hosts

My ssh config file looks like this:

Host our.bitbucker.server.url
Hostname our.bitbucker.server.url
IdentityFile /root/.ssh/id_rsa
IdentitiesOnly yes

But when it comes to the npm install steps later on, I get the following error message:

[main] ---> Running in 85434b806424
[main] npm ERR! Error while executing:
[main] npm ERR! /usr/bin/git ls-remote -h -t ssh://our.bitbucker.server.url:7999/test/resin-access-test-repo.git
[main] npm ERR!
[main] Host key verification failed.
[main] npm ERR! fatal: Could not read from remote repository.
[main] npm ERR! Please make sure you have the correct access rights
[main] npm ERR! and the repository exists.
[main] npm ERR! exited with error code: 128
[main] npm ERR! A complete log of this run can be found in:
[main] npm ERR! /root/.npm/_logs/2018-08-22T09_41_25_955Z-debug.log

I also tried to add the key using:
RUN ssh-agent /root/.ssh/

but this seems to be not allowed:

[main] Step 11/20 : RUN ssh-agent /root/.ssh/
[main] ---> Running in 858af7450af0
[main] /root/.ssh/: Permission denied
[main] Removing intermediate container 858af7450af0
[main] The command '/bin/sh -c ssh-agent /root/.ssh/' returned a non-zero code: 1

So, any idea on how I could add our private key so that the build server is able to fetch the source from the private repos?


We had similar concerns.

We ended up resolving this by moving most of our Docker work into shell scripts (which, for us, live in a separate repo). Our Dockerfile contains one layer and looks like:

  • apt-get update
  • Set up SSH agent and required tools
  • Install base64-encoded deploy key from the Dockerfile
  • git clone the actual repo (e.g. to /tmp/build)
  • run shell script (e.g /tmp/build/
  • cleanup (e.g. rm /tmp -R -f)
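
Those steps could be sketched roughly as follows (hypothetical: the repo, key variable, and build.sh entry point are placeholders, and in the real Dockerfile this would all be chained into a single RUN instruction so the key never survives in a layer of its own):

```shell
#!/bin/sh
set -e
# 1. Update packages and install the required tools
apt-get update && apt-get install -y git openssh-client

# 2. Install a base64-encoded deploy key shipped alongside the Dockerfile
mkdir -p /root/.ssh
echo "$DEPLOY_KEY_B64" | base64 -d > /root/.ssh/id_rsa
chmod 400 /root/.ssh/id_rsa
ssh-keyscan -t rsa github.com >> /root/.ssh/known_hosts

# 3. Clone the actual repo and run its build script
git clone git@github.com:your-org/private_repo.git /tmp/build
/tmp/build/build.sh   # hypothetical entry point

# 4. Clean up so neither the key nor the source lingers in the layer
rm -rf /tmp/build /root/.ssh/id_rsa
```

Encoding the key as base64 keeps it safe to pass around as a single-line build argument or environment variable.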

@fritz it's hard to be sure exactly what the cause is, but the Host key verification failed message in that output is suspicious.

That means the key of the host is failing to verify, not the client key you're connecting with. Regardless, that's not a problem with your id_rsa key, and that part of the setup might well be working correctly.

I should note here by the way that we are working on first-class support for secrets in builds as we speak. That should make managing secrets like these private keys much easier in future, though it won't solve this host verification issue. We don't have a fixed ETA yet, but you can check the status of that on our public roadmap.

Thanks to both of you for your remarks.
Indeed, there was an issue with obtaining the right host key.
So I added a known_hosts file with just our Bitbucket server's entry to the image, using:
COPY known_hosts /root/.ssh/known_hosts

That did the trick. The repo could be loaded successfully.
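
For anyone hitting the same thing: that known_hosts file can be generated on any machine that can reach the server and then committed next to the Dockerfile for the COPY step. A hypothetical command (the hostname is a placeholder; note the npm error above shows the server listening on port 7999, which ssh-keyscan needs to be told about explicitly):

```shell
# Scan the server's RSA host key, including the non-standard SSH port,
# and save the entry for the Dockerfile's COPY known_hosts step.
ssh-keyscan -t rsa -p 7999 our.bitbucker.server.url > known_hosts
```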