Please use the thread below to discuss the related blog post:
Hi @andrewnhem, thanks for making this helpful blog post available. I'm having an issue using the NVIDIA l4t-pytorch container in a Dockerfile that I push to balenaOS on my NVIDIA device. My app needs GPU-based PyTorch, so I need the CUDA toolkit running on the device. However, there seems to be a problem with that on balenaOS; I suspect it's missing some of the required NVIDIA tools. Can you please guide me on this?
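For context, here is a minimal sketch of the kind of Dockerfile I mean — the image tag shown is just an example, and the assumption that it must match the L4T version shipped with the device's balenaOS release is something I'd want confirmed:

```dockerfile
# Sketch only: the r32.7.1-pth1.10-py3 tag is an example. The container's
# L4T version should match the L4T version of the balenaOS release on the
# device, since mismatched CUDA userspace libraries are a common cause of
# the GPU not being visible inside the container.
FROM nvcr.io/nvidia/l4t-pytorch:r32.7.1-pth1.10-py3

WORKDIR /usr/src/app
COPY . .

# Quick check at startup: torch.cuda.is_available() returning False
# usually indicates a driver/L4T mismatch between host and container.
CMD ["python3", "-c", "import torch; print(torch.cuda.is_available())"]
```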
Have there been any updates to the process for deploying CUDA and OpenCV to a Jetson TX2 (or even for the Nano)? I’m unable to make this procedure work with my Jetson TX2 and CTI Astro carrier board. I suspect I have a mismatch between the Nvidia drivers in balenaOS and the version of L4T and/or CUDA available in the balenaOS repository.
What versions are you trying to install? Thanks!