Deploying Nvidia NGC Docker containers

Hello,

We would like to use balena to deploy to a fleet of Jetson Nano boards. Due to the nature of our use case, we would like to leverage Nvidia’s DeepStream SDK to accelerate development. They offer pre-built Docker containers (NGC containers) containing the required libraries and hardware acceleration support.

However, these containers need to be run with the NVIDIA Container Runtime (see https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-Container-Runtime-on-Jetson).
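
For example, on a stock JetPack install with nvidia-docker, one of these containers would be selected to run under the NVIDIA runtime roughly as follows (a sketch only; the DeepStream image tag below is a placeholder and would need to match the L4T release on the device):

```yaml
# Illustrative only: how an NGC DeepStream container is normally run under
# the NVIDIA runtime on a standard JetPack/Docker setup. The image tag is a
# placeholder.
version: '2.3'
services:
  deepstream:
    image: nvcr.io/nvidia/deepstream-l4t:<tag>
    runtime: nvidia
```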

Will this functionality be supported in the near future?


Hi,
Unfortunately we do not have plans to support the NVIDIA Docker runtime, but I am sure you can run the stack without it. It would just require a bit more work, as it will not work “out of the box”.
Kind regards,
Theodor

Hello telphan,

Can you be a bit more specific? For example, what do you mean by “you can run the stack without that”,
or by “it would just require a bit more work as it will not work ‘out of the box’”?

Thanks a lot!
~yiorgos

I think what was meant is that it may be possible to perform the “setting up” actions of the NVIDIA container runtime repositories as part of your own Dockerfiles in your app containers.

When we look at the README files and the source code of those repos, we see that they run scripts and commands that automate a number of configuration steps for app containers. In theory, even if Nvidia had included some hardware configuration code in their own implementation of Docker or in surrounding components (like runc), as long as it is open source it would be possible to identify and isolate that code, place it in separate executables, and run those inside privileged app containers of yours on balenaOS, along the lines of the sketch below. Having said that, there is certainly a fair amount of work involved in achieving this.
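
Here is a minimal, untested sketch of how such a privileged service might be declared in a balena docker-compose.yml. The service name and build directory are just placeholders, and your own Dockerfile in that directory would carry out the actual setup steps:

```yaml
# Sketch only: a balena multicontainer app where the DeepStream service is
# built from your own Dockerfile, which would itself install the CUDA / L4T
# userspace libraries that the NVIDIA runtime would otherwise provide.
version: '2.1'
services:
  deepstream-app:
    build: ./deepstream-app   # placeholder directory containing your Dockerfile
    privileged: true          # exposes the host device nodes (GPU, etc.)
    restart: always
```

The heavy lifting would happen in that Dockerfile, i.e. installing or copying in the libraries that the NVIDIA runtime would otherwise make available from the host.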
