Jetson (Nano) Working CSI Capture Container

Hi,

I had a working container that allowed me to capture images from a Raspberry Pi CSI camera:

nvargus-daemon &
gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=1 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' ! nvvidconv ! nvjpegenc ! filesink location=test.jpeg

However, when I try it now, I’m getting strange errors.

nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:557 No cameras available
Got EOS from element "pipeline0".
Execution ended after 0:00:00.127232917
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 266)
(Argus) Error EndOfFile: Receive worker failure, notifying 1 waiting threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 340)
(Argus) Error InvalidState: Argus client is exiting with 1 outstanding client threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 357)
(Argus) Error EndOfFile: Receiving thread terminated with error (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadWrapper(), line 368)
(Argus) Error EndOfFile: Client thread received an error from socket (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 145)
(Argus) Error EndOfFile:  (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)

I’ve also tried with nvgstcapture-1.0 --automate --capture-auto --sensor-id=0 and get similar results.

I’ve tried the example image https://github.com/balena-io-playground/jetson-nano-sample-new but that doesn’t help.

I have confirmed my device tree is set up properly: when used with JetPack 4.6.1, the same command works as expected. I’m running a recently compiled balenaOS image (balenaOS 2.88.4+rev17 with 4.9.253-l4t-r32.6).

Does anyone have a confirmed working setup that allows them to capture images from a CSI camera?

Thanks,

Andrew

Hello @smithandrewc, apologies for the delay in answering! I was doing some housekeeping and found your message on the forums!

@Langhalsdino do you have any example of a container with CSI capture running on a Jetson Nano? That would be helpful here :slight_smile:

I got the Pi Camera 2.1 working on the Jetson Nano with balena about 2 years ago. My last working code snippets are about 1.2 years old, so they might be a bit outdated. I am currently running the Pi HQ camera (IMX477) on the Jetson NX with a heavily modified balenaOS, so I cannot provide recent code snippets on the topic.

Here are a few of my assumptions; please check them before copying any code:
I assume that @smithandrewc wants to do everything inside the container and not touch balenaOS. Furthermore, I assume that you @smithandrewc are using the Raspberry Pi Camera v2.1 (IMX219) on the Jetson Nano.

Here is a dump of my **very old** Dockerfile. It is probably based on an outdated version of balenalib/jetson-nano-ubuntu:bionic - my last commit was on 22 Feb 2020, so treat it as deprecated!
FROM balenalib/jetson-nano-ubuntu:bionic

ENV DEBIAN_FRONTEND noninteractive

RUN adduser apic

ARG DRIVER_PACK="Jetson-210_Linux_R32.2.1_aarch64.tbz2"

###########################################################################
#                     Install L4T by NVIDIA                               #
###########################################################################

WORKDIR /usr/src/L4T
COPY ./L4T/$DRIVER_PACK .

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
                    bzip2 ca-certificates curl \
                    lbzip2 sudo htop curl && \
    apt-get install -y \
                    zip git \
                    python3 python3-pip python3-numpy \
                    cmake systemd && \
    tar -xpj --overwrite -f ./${DRIVER_PACK} && \
    sed -i '/.*tar -I lbzip2 -xpmf ${LDK_NV_TEGRA_DIR}\/config\.tbz2.*/c\tar -I lbzip2 -xpm --overwrite -f ${LDK_NV_TEGRA_DIR}\/config.tbz2' ./Linux_for_Tegra/apply_binaries.sh && \
    ./Linux_for_Tegra/apply_binaries.sh -r / && \
    rm -rf ./Linux_for_Tegra && \
    rm ./${DRIVER_PACK} && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* && \
    pip3 install jetson-stats

ENV LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra-egl:${LD_LIBRARY_PATH}

RUN ln -s /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so.32.1.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so && \
    ln -s /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so.32.1.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so.1 && \
    ln -sf /usr/lib/aarch64-linux-gnu/tegra/libGL.so /usr/lib/aarch64-linux-gnu/libGL.so && \
    ln -s /usr/lib/aarch64-linux-gnu/libcuda.so /usr/lib/aarch64-linux-gnu/libcuda.so.1 && \
    ln -sf /usr/lib/aarch64-linux-gnu/tegra-egl/libEGL.so /usr/lib/aarch64-linux-gnu/libEGL.so

RUN ln -s /etc/nvpmodel/nvpmodel_t210_jetson-nano.conf /etc/nvpmodel.conf && \
    ln -s /etc/systemd/system/nvpmodel.service /etc/systemd/system/multi-user.target.wants/nvpmodel.service && \
    mkdir /var/lib/nvpmodel && \
    echo "/etc/nvpmodel.conf" > /var/lib/nvpmodel/conf_file_path

###########################################################################
#                         Install CUDA                                    #
###########################################################################

ARG CUDA_TOOLKIT="cuda-repo-l4t-10-0-local-10.0.326"
ARG CUDA_TOOLKIT_PKG="${CUDA_TOOLKIT}_1.0-1_arm64.deb"

WORKDIR /usr/src/CUDA
COPY ./CUDA/$CUDA_TOOLKIT_PKG .

RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    dpkg --force-all -i ${CUDA_TOOLKIT_PKG} && \
    rm ${CUDA_TOOLKIT_PKG} && \
    apt-key add /var/cuda-repo-*-local*/*.pub && \
    apt-get update && \
    apt-get install -y --allow-downgrades cuda-toolkit-10-0 libgomp1 libfreeimage-dev libopenmpi-dev openmpi-bin && \
    dpkg --purge ${CUDA_TOOLKIT} && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

ENV CUDA_HOME=/usr/local/cuda
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
ENV PATH=$PATH:$CUDA_HOME/bin

###########################################################################
#                       Install CUDNN                                     #
###########################################################################

WORKDIR /usr/src/CUDNN

ARG CUDNN_VERSION="7.5.0.56"
ENV CUDNN_PKG_VERSION=${CUDNN_VERSION}-1
LABEL com.nvidia.cudnn.version="${CUDNN_VERSION}"

COPY ./CUDNN/libcudnn7_$CUDNN_VERSION-1+cuda10.0_arm64.deb .
COPY ./CUDNN/libcudnn7-dev_$CUDNN_VERSION-1+cuda10.0_arm64.deb .
COPY ./CUDNN/libcudnn7-doc_$CUDNN_VERSION-1+cuda10.0_arm64.deb .

RUN dpkg -i libcudnn7_$CUDNN_VERSION-1+cuda10.0_arm64.deb && \
    dpkg -i libcudnn7-dev_$CUDNN_VERSION-1+cuda10.0_arm64.deb && \
    dpkg -i libcudnn7-doc_$CUDNN_VERSION-1+cuda10.0_arm64.deb && \
    rm libcudnn7_$CUDNN_VERSION-1+cuda10.0_arm64.deb && \
    rm libcudnn7-dev_$CUDNN_VERSION-1+cuda10.0_arm64.deb && \
    rm libcudnn7-doc_$CUDNN_VERSION-1+cuda10.0_arm64.deb

###########################################################################
#                       Install NvInfer                                   #
###########################################################################

WORKDIR /usr/src/NvInfer

ARG INF_VERSION="5.1.6"
ENV INF_PKG_VERSION=${INF_VERSION}-1
LABEL com.nvidia.inf.version="${INF_VERSION}"

COPY ./NvInfer/libnvinfer-dev_$INF_VERSION-1+cuda10.0_arm64.deb .
COPY ./NvInfer/libnvinfer5_$INF_VERSION-1+cuda10.0_arm64.deb .
COPY ./NvInfer/libnvinfer-samples_$INF_VERSION-1+cuda10.0_all.deb  .
COPY ./NvInfer/python3-libnvinfer_$INF_VERSION-1+cuda10.0_arm64.deb .
COPY ./NvInfer/python3-libnvinfer-dev_$INF_VERSION-1+cuda10.0_arm64.deb .

RUN dpkg -i libnvinfer5_$INF_VERSION-1+cuda10.0_arm64.deb && \
    dpkg -i libnvinfer-dev_$INF_VERSION-1+cuda10.0_arm64.deb && \
    dpkg -i libnvinfer-samples_$INF_VERSION-1+cuda10.0_all.deb && \
    dpkg -i python3-libnvinfer_$INF_VERSION-1+cuda10.0_arm64.deb && \
    dpkg -i python3-libnvinfer-dev_$INF_VERSION-1+cuda10.0_arm64.deb && \
    rm libnvinfer5_$INF_VERSION-1+cuda10.0_arm64.deb && \
    rm libnvinfer-dev_$INF_VERSION-1+cuda10.0_arm64.deb && \
    rm libnvinfer-samples_$INF_VERSION-1+cuda10.0_all.deb && \
    rm python3-libnvinfer_$INF_VERSION-1+cuda10.0_arm64.deb && \
    rm python3-libnvinfer-dev_$INF_VERSION-1+cuda10.0_arm64.deb

###########################################################################
#                         Install TensorRT                                #
###########################################################################

WORKDIR /usr/src/TensorRT

ARG TRT_VERSION="5.1.6"
ENV TRT_VERSION_EXT 5.1.6.1
ENV TRT_PKG_VERSION=${TRT_VERSION}-1
LABEL com.nvidia.trt.version="${TRT_VERSION}"

COPY ./TensorRT/graphsurgeon-tf_$TRT_VERSION-1+cuda10.0_arm64.deb .
COPY ./TensorRT/tensorrt_$TRT_VERSION_EXT-1+cuda10.0_arm64.deb .
COPY ./TensorRT/uff-converter-tf_$TRT_VERSION-1+cuda10.0_arm64.deb .

RUN dpkg -i graphsurgeon-tf_$TRT_VERSION-1+cuda10.0_arm64.deb && \
    dpkg -i tensorrt_$TRT_VERSION_EXT-1+cuda10.0_arm64.deb && \
    dpkg -i uff-converter-tf_$TRT_VERSION-1+cuda10.0_arm64.deb && \
    rm graphsurgeon-tf_$TRT_VERSION-1+cuda10.0_arm64.deb && \
    rm tensorrt_$TRT_VERSION_EXT-1+cuda10.0_arm64.deb && \
    rm uff-converter-tf_$TRT_VERSION-1+cuda10.0_arm64.deb

###########################################################################
#                         Install OpenCV 4.x                              #
###########################################################################

WORKDIR /usr/src/OpenCV

ARG OPEN_CV_VERSION="4.2.0"

COPY ./OpenCV/cuda_gl_interop.h.patch .

RUN apt-get update && \
    apt-get install -y python3-protobuf python3-numpy python3-matplotlib && \
    apt-get install -y --no-install-recommends \
        build-essential lbzip2 make cmake g++ wget unzip pkg-config \
        libavcodec-dev libavformat-dev libavutil-dev libavresample-dev libswscale-dev libeigen3-dev libglew-dev libgstreamer1.0-0 libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev \
        gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools \
        gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-pulseaudio \
        xkb-data libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjpeg8-dev libjpeg-turbo8-dev libxine2-dev libdc1394-22-dev libv4l-dev \
        v4l-utils qv4l2 v4l2ucp libatlas-base-dev libopenblas-dev liblapack-dev liblapacke-dev gfortran libgtk2.0-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN wget https://github.com/opencv/opencv/archive/$OPEN_CV_VERSION.zip -O opencv-$OPEN_CV_VERSION.zip && unzip opencv-$OPEN_CV_VERSION.zip && rm opencv-$OPEN_CV_VERSION.zip && \
    wget https://github.com/opencv/opencv_contrib/archive/$OPEN_CV_VERSION.zip -O opencv_modules.$OPEN_CV_VERSION.zip && unzip opencv_modules.$OPEN_CV_VERSION.zip && rm opencv_modules.$OPEN_CV_VERSION.zip && \
    mkdir -p opencv-$OPEN_CV_VERSION/build

RUN patch -N /usr/local/cuda/include/cuda_gl_interop.h < cuda_gl_interop.h.patch && \
    rm cuda_gl_interop.h.patch && \
    cd opencv-$OPEN_CV_VERSION/build && \
    cmake --verbose \
        -D OPENCV_GENERATE_PKGCONFIG=ON \
        -D OPENCV_PC_FILE_NAME=opencv.pc \
        -D CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME \
        -D WITH_CUDA=ON \
        -D WITH_CUDNN=ON \
        -D OPENCV_DNN_CUDA=ON \
        -D CUDA_ARCH_BIN="5.3" \
        -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-$OPEN_CV_VERSION/modules \
        -D CUDA_ARCH_PTX="" \
        -D WITH_CUBLAS=ON \
        -D ENABLE_FAST_MATH=ON \
        -D CUDA_FAST_MATH=ON \
        -D ENABLE_NEON=ON \
        -D WITH_GSTREAMER=ON \
        -D WITH_LIBV4L=ON \
        -D BUILD_opencv_python2=OFF \
        -D BUILD_opencv_python3=ON \
        -D BUILD_TESTS=OFF \
        -D BUILD_PERF_TESTS=OFF \
        -D BUILD_SAMPLES=OFF \
        -D BUILD_EXAMPLES=OFF \
        -D CMAKE_BUILD_TYPE=RELEASE \
        -D WITH_GTK=OFF \
        -D WITH_QT=OFF \
        -D WITH_OPENGL=ON \
        -D BUILD_DOCS=OFF \
        -D CMAKE_INSTALL_PREFIX=/usr/local .. && \
    make -j8 && \
    make install && \
    cp unix-install/opencv.pc /usr/local/lib/pkgconfig && \
    rm -rf /usr/src/OpenCV/opencv-$OPEN_CV_VERSION && \
    rm -rf /usr/src/OpenCV/opencv_contrib-$OPEN_CV_VERSION && \
    ldconfig

###########################################################################
#                       Install TensorFlow                                #
###########################################################################

WORKDIR /usr/src/TensorFlow
ARG TF_VERSION="2.0.0rc0"

COPY ./TensorFlow/tensorflow-${TF_VERSION}-cp36-cp36m-linux_aarch64.whl .

# Install TensorFlow dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
                      libtool pkg-config build-essential \
                      autoconf automake libffi-dev \
                      libhdf5-dev python3-h5py && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

RUN pip3 install \
          h5py==2.10.0 \
          numpy==1.16.0 \
          setuptools==41.0.0 \
          cython==0.29.14

# Install Tensorflow
RUN pip3 install tensorflow-${TF_VERSION}-cp36-cp36m-linux_aarch64.whl

###########################################################################
#                            Setup Entrypoint                             #
###########################################################################

WORKDIR /usr/src/app

COPY ./entrypoint.sh .

ENTRYPOINT ["./entrypoint.sh"]

One thing I clearly remember is that the nvargus-daemon needs to be running in the background. I therefore used the following commands for testing:

nvargus-daemon &
gst-launch-1.0 nvarguscamerasrc

# Launch with fake sink
gst-launch-1.0 -v nvarguscamerasrc ! nvvidconv flip-method=0 ! 'video/x-raw, width=1280, height=720, format=BGRx' ! videoconvert ! 'video/x-raw, format=BGR' ! identity silent=false ! fakesink -e

Furthermore, exporting LD_LIBRARY_PATH, CUDA_HOME and PATH was necessary two years ago :see_no_evil:

export LD_LIBRARY_PATH="/usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra-egl:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
export PATH=$PATH:$CUDA_HOME/bin

Overall I think your error message indicates that the nvargus-daemon is not running. Could you check that the daemon is running?
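A quick way to check from inside the container (a sketch; `pgrep` ships with the procps package, and the commented-out start line assumes the Tegra libraries from the driver pack are in place):

```shell
# Check whether nvargus-daemon is already running.
if pgrep -x nvargus-daemon > /dev/null; then
    echo "nvargus-daemon is running"
else
    echo "nvargus-daemon is not running"
    # nvargus-daemon &   # start it in the background before gst-launch-1.0
fi
```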

Anyhow, using the CSI connector with the IMX219 (the driver is part of the kernel) was straightforward and should still be working. Only the nvargus-daemon and the NVIDIA software side are sometimes a bit difficult, with strange or misleading error messages.

If you are using the Pi HQ Camera (IMX477), things are going to get a bit more difficult. :see_no_evil:


@smithandrewc what JetPack version are you running? I remember from our transition to the Jetson NX that we could not get nvjpegenc or nvpngenc to run.
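In case nvjpegenc is the culprit on your side too, one fallback worth trying is the CPU-based jpegenc element (an untested sketch of your original pipeline; it assumes gstreamer1.0-plugins-good is installed in the container):

```shell
# Same capture, but with the software jpegenc instead of nvjpegenc.
# nvvidconv copies the frame out of NVMM memory so the CPU encoder can read it.
gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=1 \
  ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080' \
  ! nvvidconv ! 'video/x-raw, format=(string)I420' \
  ! jpegenc ! filesink location=test.jpeg
```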

does this provide you with images?

# Do not forget to run nvargus-daemon & in the background first
import cv2

def gstreamer_pipeline(
    capture_width=1280,
    capture_height=720,
    display_width=1280,
    display_height=720,
    framerate=60,
    flip_method=0,
):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )

cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
print(cap.isOpened())

success, image = cap.read()
count = 0
while success:
    cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
    success, image = cap.read()
    print('Read a new frame: ', success)
    count += 1
cap.release()