(The project is fairly large, so we want to keep the docker-related code tucked away in a separate repository.)
My Dockerfile needs access to the source code in the private repositories, but this seems to be a tricky problem. For one, I can’t simply RUN git clone <repo> because of the security issues around baking SSH keys into an image. Similarly, I can’t copy code from my development machine (e.g. COPY ../../project_root project_root), because Docker forbids COPY paths that point outside the build context.
So, I’m trying to figure out a solution more elegant than some of the hacky solutions I’ve found. Does anyone know of good examples to follow?
I’m not familiar with Moby, so I don’t fully understand the bugs you’ve linked there. To build this image though, you’re going to need to somehow give the Resin.io build server access to check out code from those repos during the build process, which isn’t trivial. I’m afraid I don’t have any easy examples: for most cases it’s much easier to push a complete single repo to Resin directly.
We don’t yet have a fantastic solution to this. Right now the best option is to commit an SSH key that has access to clone the private repos (private_repo_1, private_repo_2, …) inside the Dockerfile repo (private_installation_repo), so that the Dockerfile build can use it to clone the rest of the code.
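For a concrete sketch of what that looks like (the key filename and org name here are placeholders, and this assumes a base image with git and ssh available):

# the deploy key committed alongside the Dockerfile
COPY deploy_key /root/.ssh/id_rsa
RUN chmod 400 /root/.ssh/id_rsa
# trust the host up front so the non-interactive clone doesn't fail
RUN ssh-keyscan -t rsa github.com >> /root/.ssh/known_hosts
RUN git clone git@github.com:your-org/private_repo_1.git
RUN git clone git@github.com:your-org/private_repo_2.git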
Of course this alone isn’t fantastic for security, and you should be very careful if you need to do this. You can reduce the risks significantly though by using deploy keys (assuming your repos are on Github). These allow you to generate an SSH key with minimal privileges, one that can read a single repo and no more, limiting the risk if it’s ever exposed.
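Generating one is ordinary SSH key generation; for instance (a sketch, and the filename is arbitrary):

# one key pair per private repo, no passphrase
ssh-keygen -t ed25519 -N "" -f deploy_key_private_repo_1

The .pub half then goes into the repo’s settings on Github as a read-only deploy key, and the private half is what gets committed into private_installation_repo.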
If you want to control access to this further, you can look at tools like Vault, which allow you run your own hosted service that controls access to these keys, and commit a single key for that into your repo, which you can revoke, manage and audit independently of the SSH keys themselves. That’s quite a bit more involved though, so it depends on your needs and your security concerns here.
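To give a rough feel for that shape (very much a sketch: the Vault address, secret path and field name here are made up for illustration), the repo would commit only a Vault token, and the build would fetch the real key at build time:

# fetch the deploy key from your own hosted Vault at build time
export VAULT_ADDR=https://vault.example.com
vault login "$(cat committed_vault_token)"
vault kv get -field=private_key secret/deploy-keys/private_repo_1 > ~/.ssh/id_rsa
chmod 400 ~/.ssh/id_rsa

Revoking that token in Vault then cuts off access without touching the SSH keys themselves.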
In playing around a little bit, I’m realizing it’ll be easier to have separate deployment pipelines for development and production. I think the Vault solution you provided will work for production, but I’ve run into a snag with the development pipeline.
For my local development Dockerfile, it makes more sense for me to ADD project_root from local code than to RUN git clone for each private repo, but I need to get project_root into the Dockerfile’s context somehow (a hacky solution would be to mv my Dockerfile to the project root, then resin local push, then mv it back).
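Spelled out, that hack would be something like (the device address is a placeholder):

# hacky: temporarily relocate the Dockerfile into the build context
mv private_installation_repo/Dockerfile project_root/Dockerfile
(cd project_root && resin local push mydevice.local)
mv project_root/Dockerfile private_installation_repo/Dockerfile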
Ideally, I’d want to call resin local push from the directory containing project_root, while specifying the path to Dockerfile (as in this example). This would rsync my project_root over to my device, then call something like docker build -f private_installation_repo/Dockerfile . on the device itself.
Is anything like that currently possible or in the works?
@nckswt I don’t see any easy way of achieving that with resin for now, but this is interesting feedback, thanks! Looping in @kostas and @hedley; maybe they have better ideas than me.
You might be able to do this by symlinking the individual private repos (private_repo_n) all into private_installation_repo. You’d essentially be creating a new root that’s exactly the same but which does have the Dockerfile at the top level, without having to actually move anything around. I assume resin local push handles symlinks correctly, but I must admit I’ve never actually tested this. Definitely sounds like a bug if it doesn’t, though.
That’s a bit of a hassle to manage, but you could easily write a dev environment script to create the links (see the sketch below), and potentially run it from a git hook to make that even easier.
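Something like this, say (assuming the script runs from the directory that contains all the repos, and using the repo names from earlier):

#!/bin/sh
# link each private repo into the Dockerfile repo, creating a combined root
for repo in private_repo_1 private_repo_2; do
    ln -sfn "../$repo" "private_installation_repo/$repo"
done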
Just to follow up: we’ve decided to unify our private repos into one repo to simplify our operations (cloning in the Dockerfile, managing PRs, setting up Jenkins, etc.).
Though, if we ever do have a repo outside of this unified repo, we’ll use a GitHub machine user with personal access tokens for authentication. Some of our (non-Resin-related) install code didn’t play nicely with using ssh config to manage deploy keys.
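For anyone hitting the same thing, the machine-user approach boils down to cloning over HTTPS with the token in the URL (the org, repo and env var names here are placeholders):

# clone as the machine user; GITHUB_TOKEN holds a personal access token
git clone https://machine-user:${GITHUB_TOKEN}@github.com/your-org/private_repo_1.git

which sidesteps SSH and deploy-key config entirely.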
Hi, @pimterry: I do have a similar issue. We need to fetch code from our private Bitbucket server (self-hosted).
I played around and added a private id_rsa key to the Dockerfile and added the matching public key to the repository, but I can’t seem to get this to work.
I have the following Dockerfile:
# create the SSH directory and install the committed key and config
RUN mkdir -p /root/.ssh/
COPY id_rsa_resin /root/.ssh/id_rsa
COPY config /root/.ssh/config
RUN chmod 400 /root/.ssh/id_rsa
# point git at the key; note this also disables host key checking entirely
ENV GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
RUN echo "$GIT_SSH_COMMAND"
# record the server's host key as well
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan -t rsa our.bitbucket.server.url >> /root/.ssh/known_hosts
We ended up resolving this by moving most of our Docker work to shell files (which, for us, live in a separate repo). Our Dockerfile contains one layer and looks like:
apt-get update
Set up the SSH agent and required tools
Install the base64-encoded deploy key from the Dockerfile
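Roughly, a minimal sketch of that shape (the base image, script name and DEPLOY_KEY_B64 build arg are all illustrative, not our exact setup):

FROM debian:bullseye
ARG DEPLOY_KEY_B64
COPY setup.sh /tmp/setup.sh
# one layer: update apt, install SSH tooling, decode the deploy key,
# run the real setup script, then remove the key again
RUN apt-get update && \
    apt-get install -y git openssh-client && \
    mkdir -p /root/.ssh && \
    echo "$DEPLOY_KEY_B64" | base64 -d > /root/.ssh/id_rsa && \
    chmod 400 /root/.ssh/id_rsa && \
    sh /tmp/setup.sh && \
    rm /root/.ssh/id_rsa

One caveat: even though the key is removed in the same layer here, a build arg still shows up in the image history, so treat this as a sketch rather than a hardened setup.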
I should note here by the way that we are working on first-class support for secrets in builds as we speak. That should make managing secrets like these private keys much easier in future, though it won’t solve this host verification issue. We don’t have a fixed ETA yet, but you can check the status of that on our public roadmap here: https://trello.com/c/ucTQpzRE/44-build-secrets-and-variables
Hi,
thanks to both of you for your remarks.
Indeed, there was an issue with obtaining the right host key.
So I added a known_hosts file with just the entry for our Bitbucket server to the image and used:
COPY known_hosts /root/.ssh/known_hosts
That did the trick. The repo could be loaded successfully.
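For reference, that known_hosts file can be generated ahead of time on the dev machine with the same ssh-keyscan invocation from the Dockerfile above, along the lines of:

# run locally, then copy the result into the build context
ssh-keyscan -t rsa our.bitbucket.server.url > known_hosts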
Great.
Thanks!
Fritz