Support for Docker 19.03

Hi all,

Any indication when Docker 19.03 will be supported in balenaEngine, enabling NVIDIA GPUs? As in moby/moby#38828.

Cheers!

Hi,

I’ve passed your question on to our balenaEngine maintainer; we will get back to you as soon as we can.

We will be looking at it in the coming months. We are interested in the DeviceRequests API ourselves. I will keep this thread updated as I make progress on it.
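For context, the DeviceRequests API surfaces in upstream Docker 19.03 as the `--gpus` flag on `docker run`. A hedged sketch of what that enables (the image tag is illustrative, and these commands assume a host with the NVIDIA container runtime hooks and a GPU, so they won't run on a plain machine):

```shell
# Request all GPUs for the container and verify with nvidia-smi
docker run --rm --gpus all nvidia/cuda:10.0-base nvidia-smi

# Or request a specific device by index
docker run --rm --gpus '"device=0"' nvidia/cuda:10.0-base nvidia-smi
```

This is the upstream Docker 19.03 syntax; whether and how balenaEngine exposes the same flag is exactly what this thread is tracking.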

Thanks @robertgzr. I’ve been speaking to @joehounsham, who confirmed what you have said. I will keep following this topic and issue #188.

It has been a while, any updates on the subject?

Hi, the relevant work is done: https://github.com/balena-os/balena-engine/tree/19.03-balena

I don’t have an ETA for inclusion in balenaOS; we will have to go through some testing, since this is a major release of upstream Docker.

That’s great news. Considering 19.03 support, how close are we to leveraging nvidia-docker?

We don’t have a timeline yet. For the moment, that feature is blocked by: https://github.com/balena-os/balena-engine/issues/188.

@Ereski, would this mean balenaOS would include the CUDA drivers? @joe2612, we are finalising our DeepStream pipeline, so it would be great to get this working!

@remsol, there are no specific considerations regarding this as of yet, as the balenaEngine update is currently waiting on some groundwork in the OS.

I have looked into it a little and there seems to be a way to provide the drivers via a container: https://github.com/NVIDIA/nvidia-docker/wiki/Driver-containers-(Beta)
This would enable you to start experimenting as soon as we release a balenaOS version with the new engine.
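As a rough idea of what that wiki page describes, the driver container is started privileged so it can load kernel modules and share the driver files with other containers. The image tag and mounts below are illustrative and may well differ from the current instructions on that page:

```shell
# Sketch only: run NVIDIA's driver container (beta).
# Pick a driver version matching your kernel; the tag here is a placeholder.
docker run -d --privileged --pid=host \
  -v /run/nvidia:/run/nvidia:shared \
  --name nvidia-driver \
  nvidia/driver:418.40.04-ubuntu18.04
```

GPU workload containers then pick the driver up from the shared mount rather than from the host OS, which is what makes this interesting for balenaOS.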

@Ereski it seems there is a very promising PR in progress in meta-tegra adding nvidia-docker support!

@remsol that is interesting indeed. Unfortunately, this still might not allow us to ship the CUDA (and related) blobs in the OS. One of the main problems, as I currently understand it, is around licensing from NVIDIA: balenaOS is currently not allowed to redistribute the closed-source binary blobs, as users need to individually accept NVIDIA’s EULA. This is why the recommended method is to install CUDA, TensorRT, etc. in the container user space. We have been exploring other options to improve this, but currently the issues are more on the legal/partnership side than on the technical side.
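The "install CUDA in the container user space" approach can be sketched as a Dockerfile. Everything here is illustrative: the base image and the repo package filename are placeholders, and you obtain the `.deb` yourself via NVIDIA's SDK Manager, accepting the EULA in the process, which is what keeps the blobs out of the host OS:

```dockerfile
# Sketch: CUDA user-space libraries installed inside the container,
# so no NVIDIA blobs need to ship in balenaOS itself.
# Base image and package names are hypothetical examples.
FROM balenalib/jetson-tx2-ubuntu:bionic

# The CUDA repo package downloaded via NVIDIA SDK Manager
# (the user accepts NVIDIA's EULA when downloading it).
COPY cuda-repo-l4t-10-0-local_arm64.deb /tmp/
RUN dpkg -i /tmp/cuda-repo-l4t-10-0-local_arm64.deb && \
    apt-get update && \
    apt-get install -y cuda-toolkit-10-0 && \
    rm -rf /var/lib/apt/lists/* /tmp/*.deb
```

The same pattern applies to TensorRT and the DeepStream SDK: copy the vendor packages into the build context and install them at image build time.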

@shaunmulligan thanks for your answer. It’s indeed unfortunate that NVIDIA distributes these packages through SDK Manager. I understand that this puts balena in a difficult position.

I did give it some thought, though, and a possible solution could be the suggestions on the meta-tegra wiki page.

This would mean that we need to build the balenaOS images ourselves on a local build host after downloading the relevant packages through SDK Manager. As long as we are able to onboard the device in balenaCloud, we are happy. Could you indicate whether this could work with balenaOS, or what the implications would be?

In theory this would be possible, but we would not support it, and these binaries would be deleted after an OS upgrade. Operationally, we cannot support a fleet of devices provisioned to our platform that are incapable of updating their OS, as this drastically slows down our ability to improve the platform.

I do understand this from balena’s perspective. The only lever left to pull is contacting NVIDIA as a member of their AI Inception program. However, I don’t think we have more leverage there than balena does.

To get the relevant packages into the base image, we are left with developing our own image on top of meta-tegra. Please do keep us updated if things change in the future.