Hi all,

We are currently deploying a PyTorch-based AI application to Jetson Orin devices managed by BalenaOS. We've run into a significant container-compatibility issue and wanted to ask for your guidance on the recommended path forward.

Our goal was to use Python 3.12, but we encountered a series of cascading incompatibilities:
- NVIDIA image failure: Using a standard NVIDIA image (such as `nvcr.io/nvidia/pytorch:25.06-py3-igpu`) failed, as it is not compatible with BalenaOS's specific driver-provisioning model for the Jetson GPU.
- Custom build failure: We then tried building on a Balena-compatible base image (https://github.com/balena-io-examples/jetson-examples/blob/master/jetson-orin/Dockerfile) and manually installing Python 3.12 on top of it. This failed because there are no official Jetson-specific PyTorch wheels for Python 3.12; the build fell back to a generic `aarch64` wheel, which could not access the Jetson GPU and crashed at runtime.
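For anyone reproducing this, here is a small diagnostic we found useful to confirm which interpreter/platform tags pip resolves inside the container, and whether the installed torch build can actually see the GPU (the `torch` check is optional and degrades gracefully if it is not installed):

```python
# Print the interpreter version and platform tag that pip uses when
# selecting wheels inside the container.
import sys
import sysconfig

py_version = ".".join(str(v) for v in sys.version_info[:3])
platform_tag = sysconfig.get_platform()  # e.g. "linux-aarch64" on Jetson

print("python:", py_version)
print("platform:", platform_tag)

# A generic aarch64 torch build typically reports False here even on a
# Jetson, because it was not compiled against the Jetson CUDA stack.
try:
    import torch
    print("torch:", torch.__version__, "cuda:", torch.cuda.is_available())
except ImportError:
    print("torch: not installed")
```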
Our short-term resolution has been to revert our application's dependencies to Python 3.10, which remains compatible with the base image and the available Jetson-specific PyTorch wheels.
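For context, our fallback Dockerfile is roughly shaped like the sketch below. This is illustrative only: the base image tag and the wheel URL are placeholders, to be filled in from the linked balena example and from NVIDIA's Jetson PyTorch wheel index for the matching JetPack release.

```dockerfile
# Illustrative sketch of the Python 3.10 fallback; BASE_IMAGE and
# TORCH_WHEEL_URL are placeholders, not verified values.
ARG BASE_IMAGE=balena-jetson-orin-base:placeholder
FROM ${BASE_IMAGE}

# Ubuntu 22.04-based images ship Python 3.10, which matches the
# available Jetson-specific PyTorch wheels.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Install the Jetson-specific wheel rather than the generic aarch64
# build from PyPI, so torch can actually reach the GPU.
ARG TORCH_WHEEL_URL
RUN pip3 install --no-cache-dir "${TORCH_WHEEL_URL}"
```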
This leads to our questions for your team:
- Is reverting to Python 3.10 (and its compatible PyTorch wheel) the current, officially recommended path for deploying GPU-accelerated PyTorch apps on BalenaOS for Jetson Orin? We ask because Python 3.10 reaches end-of-life in October 2026, so we are keen to understand the future-proof path.
- Are there any plans for a Balena-compatible base image for Jetson Orin that will officially support Python 3.12 and the necessary hardware-accelerated libraries?
- If we have a hard requirement for Python 3.12, would you recommend building PyTorch from source inside our container? If so, are there any Balena-specific guides or best practices for this, particularly concerning driver interactions and the build environment?
Thank you for your time and any guidance you can provide.