Using the Ultralytics jetson-jetpack4 image on Jetson Nano

Hi all,

This post is a sequel to my previous one, Running Yolov8n on Jetson Nano with TensorRt - Product support - balenaForums.

I submitted the Docker image to Ultralytics, and it has been accepted. Their image should work on the Jetson Nano, but I couldn't get it to run. The problem is mounting the NVIDIA runtime in Docker Compose. Is there a workaround to run the new Docker image on a balena device?

These are my testing scripts:

# docker-compose.yml
version: '2.4'
services:
  yolo_env:
    build: .
    ipc: host
    runtime: nvidia

# Dockerfile
# Use the base image from Ultralytics for YOLO on Jetson Nano
FROM ultralytics/ultralytics:8.2.48-jetson-jetpack4

# Install wget
RUN apt-get update && apt-get install -y wget

# Set the working directory
WORKDIR /app

# Copy the Python script into the container
COPY check_versions.py /app

# Command to run when the container starts
CMD ["python3", "check_versions.py"]

# check_versions.py
import sys
import torch
import tensorrt as trt

print("Checking installed versions:")
print("Python Version:", sys.version)
print("PyTorch Version:", torch.__version__)
print("TensorRT Version:", trt.__version__)
print("CUDA Version:", torch.version.cuda)
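
A quick extra check I plan to run once the container starts (nothing specific to this image, just standard PyTorch calls) is the snippet below, since the version prints above only show what the wheels were built against, not whether the GPU is actually reachable:

# gpu_check.py - sanity check that the container can reach the GPU
import torch

if torch.cuda.is_available():
    # Prints the onboard Tegra GPU name when the container has GPU access
    print("CUDA device:", torch.cuda.get_device_name(0))
else:
    print("CUDA is not available inside this container")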

Regards,

Hello @MWLC

You are right, mounting the NVIDIA runtime in Docker Compose is tricky. Maybe you can get some ideas from this blog post → Using NVIDIA Jetson NGC containers on balenaOS - balena Blog

On the other hand, could you please confirm the exact hardware that you are testing? I would like to try to replicate it here!

Hi @mpous, thanks for the guide. I managed to solve the problem after adding the BSP package to my Dockerfile. This is what I have now:

FROM ultralytics/ultralytics:8.2.48-jetson-jetpack4

# Don't prompt with any configuration questions
ENV DEBIAN_FRONTEND noninteractive

# Install some utils
RUN apt-get update && apt-get install -y lbzip2 git wget unzip jq xorg tar python3 libegl1 binutils xz-utils bzip2

ENV UDEV=1

# Download and install BSP binaries for L4T 32.7.3, note new nonstandard URL
RUN \
  cd /tmp/ && wget https://developer.nvidia.com/downloads/remetpack-463r32releasev73t210jetson-210linur3273aarch64tbz2 && \
  tar xf remetpack-463r32releasev73t210jetson-210linur3273aarch64tbz2 && rm remetpack-463r32releasev73t210jetson-210linur3273aarch64tbz2 && \
  cd Linux_for_Tegra && \
  sed -i 's/config.tbz2\"/config.tbz2\" --exclude=etc\/hosts --exclude=etc\/hostname/g' apply_binaries.sh && \
  sed -i 's/install --owner=root --group=root \"${QEMU_BIN}\" \"${L4T_ROOTFS_DIR}\/usr\/bin\/\"/#install --owner=root --group=root \"${QEMU_BIN}\" \"${L4T_ROOTFS_DIR}\/usr\/bin\/\"/g' nv_tegra/nv-apply-debs.sh && \
  sed -i 's/chroot . \//  /g' nv_tegra/nv-apply-debs.sh && \
  ./apply_binaries.sh -r / --target-overlay && cd .. && \
  rm -rf Linux_for_Tegra && \
  echo "/usr/lib/aarch64-linux-gnu/tegra" > /etc/ld.so.conf.d/nvidia-tegra.conf && ldconfig

# Set the working directory
WORKDIR /app

# Copy the Python script into the container
COPY check_versions.py /app

# Command to run when the container starts
CMD ["python3", "check_versions.py"]
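
For completeness, the compose side would look something like the sketch below. I am assuming that, with the BSP libraries baked into the image, the runtime: nvidia line is no longer needed, and that privileged: true is enough for the container to reach the Tegra device nodes (a narrower device list may also work):

# docker-compose.yml (sketch under the assumptions above)
version: '2.4'
services:
  yolo_env:
    build: .
    ipc: host
    privileged: true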

Would you mind explaining why these libraries (lbzip2 jq xorg libegl1 binutils xz-utils bzip2) are included in all of the Docker images in balena-io-experimental, such as balena-jetson-catalog/tensorrt/Dockerfile.jetson-nano?

On the other hand, could you please confirm the exact hardware that you are testing?

Here are the hardware details:
TYPE: Nvidia Jetson Nano SD-CARD
HOST OS VERSION: balenaOS 4.0.9+rev2
CURRENT RELEASE == TARGET RELEASE : DEVELOPMENT
SUPERVISOR VERSION: 16.1.0

Hello, the libraries you mentioned (lbzip2 jq xorg libegl1 binutils xz-utils bzip2) are standard utilities that we have traditionally used in our Jetson examples. Not every example needs all of them, but the set covers many use cases well. In your example, at the very least I think the compression utilities are required. You could do some trial and error by removing them one by one and seeing if your Dockerfile still builds, but unless you really need the space, you can probably just keep them as is.
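
As a rough starting point for that trial and error, a trimmed install line might look like the sketch below, keeping only the packages your BSP download-and-extract step obviously touches (wget for the download, plus tar and the bzip2/xz tools for unpacking the .tbz2 archives). This is untested, so confirm that apply_binaries.sh still completes before removing the rest:

# Trimmed package list - untested sketch, verify the build still succeeds
RUN apt-get update && apt-get install -y --no-install-recommends \
    wget tar lbzip2 bzip2 xz-utils \
    && rm -rf /var/lib/apt/lists/*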

Thanks for the clarification, @alanb128. Have a good day!
