Jetson: Support Nvidia Docker Images

Hey @nmaas87, thanks very much for your quick reply and help.

In my case, I need GPU-enabled PyTorch. As you know, installing GPU-enabled PyTorch is not straightforward, so I wanted to use the NVIDIA PyTorch container in my Dockerfile. However, I couldn’t get it working on my NVIDIA Jetson Nano running balenaOS (v2.82.11+rev11). In essence, I am quite a bit confused about how to containerize an app that needs GPU-enabled PyTorch and CUDA so I can deploy it to a balenaOS device via balenaCloud. Here is my Dockerfile:

FROM nvcr.io/nvidia/l4t-pytorch:r32.5.0-pth1.7-py3

ENV PYTHONUNBUFFERED 1
ENV DEBIAN_FRONTEND noninteractive
ENV OPENBLAS_CORETYPE ARMV8
ENV UDEV 1

RUN apt-get update && apt-get upgrade -y \
    && apt-get install -y gcc apt-utils \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

ENV PROGRAM_DIR=/app

RUN mkdir $PROGRAM_DIR
WORKDIR $PROGRAM_DIR

COPY requirements.txt /tmp

RUN pip3 install --upgrade pip
RUN pip3 install -r /tmp/requirements.txt

COPY . $PROGRAM_DIR

RUN python3 -m pip list

CMD python3 $PROGRAM_DIR/example.py

If you don’t mind, I have several questions below:

  1. Can we use the containers provided by NVIDIA in a Dockerfile on balenaOS? I wanted to use the container below as the first line of my Dockerfile, since it already has all the libraries I need, such as GPU-enabled PyTorch and the CUDA toolkit.
FROM nvcr.io/nvidia/l4t-pytorch:r32.5.0-pth1.7-py3

However, even though the image builds, I get the following error on balenaCloud:

OSError: libcurand.so.10: cannot open shared object file: No such file or directory

I have been researching this error for a week. I came across the NVIDIA forum post below, in which the same error was reported. So far, as far as I can tell, the error means the Dockerfile alone cannot fully wire up the NVIDIA container to balenaOS on the device, if I am not wrong. I guess I am supposed to set my Docker daemon’s default runtime to nvidia; however, there is no daemon.json file on my device running balenaOS.

https://forums.developer.nvidia.com/t/docker-build-on-jetson-xavier/185118

I guess that nvidia-container-runtime is required to use the NVIDIA container, and that its daemon.json should be configured accordingly. I therefore tried to install nvidia-container-runtime, since my device has Docker version 19.03.23 and L4T version R32.5.0. However, balenaOS would not let me install this tool. I also looked into why. According to https://nvidia.github.io/nvidia-container-runtime/, where the supported distributions of the NVIDIA Container Runtime are listed, balenaOS is not supported; if I am not wrong, balenaOS is a Yocto-based host OS, and I couldn’t see it in that list. Overall, I couldn’t make any progress with the NVIDIA containers.
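For reference, on a stock JetPack/L4T install the runtime would normally be registered in /etc/docker/daemon.json roughly like this (per NVIDIA’s standard instructions; this is exactly the file that is missing on balenaOS):

    {
        "runtimes": {
            "nvidia": {
                "path": "nvidia-container-runtime",
                "runtimeArgs": []
            }
        },
        "default-runtime": "nvidia"
    }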

Can you please guide me if I am missing anything? Can I somehow use NVIDIA containers in a Dockerfile for balenaOS?

  2. If I cannot use NVIDIA containers, am I supposed to use only balenalib containers, such as the one below from the tutorial you suggested? If so, how would you suggest I build my app, which requires GPU-enabled PyTorch, in a Dockerfile for a device running balenaOS?
balenalib/jetson-nano-ubuntu:bionic

I would be so grateful if you could help me with this.

Hi there,
sadly I cannot help you much there, as I am a volunteer and just checking in from the lunch break of my real job. However, I want to put emphasis on the shared repo again.

Regarding 1.) balenaEngine does not include the nvidia-container-runtime you’re looking for and cannot load any external modules. As far as I know, one of the reasons balenaEngine is so sleek and fast compared to the “original” Docker Engine is that it no longer has a plugin system. However, it should not be necessary to use this runtime, as it is probably just binding A LOT of folders/files in the background, as mentioned earlier. So I would not try with the NVIDIA containers, which probably need something specific that you could only get running with a lot of legwork.
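To illustrate what that binding means in practice, a manual attempt would look roughly like this (a sketch only; the host paths are assumptions taken from stock L4T, and the real list the runtime mounts is far longer):

    # bind a few of the host's Tegra/CUDA libraries into the container by hand
    balena run -it \
        -v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra:ro \
        -v /usr/local/cuda-10.2:/usr/local/cuda-10.2:ro \
        nvcr.io/nvidia/l4t-pytorch:r32.5.0-pth1.7-py3 bash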

2.) Coming to this point, as I said earlier, the repo GitHub - balena-io-playground/jetson-nano-sample-new: Jetson Nano sample using new packages contains a fully working example that builds and runs a balena-compatible container with a) CUDA and b) OpenCV support. There is also a chance that PyTorch might be installed in one of them. Both the docker-compose file and the Dockerfiles are completely open. In your case, I would just run the example, see if what you need is there, and work from that for your project. Probably more efficient than trying to hammer the NVIDIA containers until they fit.
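For concreteness, trying the example would look something like this (standard balena CLI usage; the app name is a placeholder):

    # clone the sample project and push it to your own fleet
    git clone https://github.com/balena-io-playground/jetson-nano-sample-new.git
    cd jetson-nano-sample-new
    balena push <your-app-name>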

I think with that, you should arrive at a functional prototype within a short amount of time :slight_smile:

Cheers


Catching up on this thread: thanks @nmaas87 for the assistance here; your notes and explanation are exactly what I would recommend to @aktaseren as well. The nvidia runtime is not available in balenaOS, so everything you need for your application has to be installed in your container. Our base images and that example repo are the best starting point. The CUDA and/or OpenCV container example might indeed pull in PyTorch, I am not entirely sure … but you could certainly take that template and expand upon it. Thanks!


Guys @nmaas87 @dtischler, thanks a lot for the brainstorm. Your suggestions actually helped me a lot. Similar to the guide repo you suggested, I came across balenaHub, which is full of completed projects, some of them similar to what I am doing with NVIDIA devices.

For example, the ROS2 pose estimation example project was built using CUDA + PyTorch + OpenCV. It is a little complex, but I tried it and it works very well.

Thanks a lot again.


I forgot about that repo, excellent, glad it helped. That is a rather full-featured example, so you might need to trim it down a bit for your specific use case, but either way it is an excellent starting point, yep!

@dtischler

What do I have to do to get the nvidia runtime working? Any ideas on how to solve this issue?

Cheers.

As already mentioned, the nvidia runtime is a proprietary software extension/plugin that loads into the Docker Engine. It cannot be loaded into balenaEngine.
For your projects, I would suggest working without it, using the example projects already given in this thread.
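For what it’s worth, you can see this from a host OS terminal; balenaEngine only lists the default runc runtime (standard engine command, output will vary by version):

    # on the balenaOS host: list the runtimes the engine knows about
    balena-engine info 2>/dev/null | grep -i runtime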

Thanks!

@nmaas87
Seems you have misunderstood.

We can use the GPU in balena services, but the intent is to install Docker inside a balena service and start a container with GPU support (see the sketch after the list below).

We were able to get this running on AMD boards.

Have you ever tried this on your Jetson board?

  • Prepare a balena service.
  • Install docker inside it.
  • Run a container with nvidia runtime enabled.
  • Check if you can detect GPU without any issue.
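Roughly, the idea looks like this (a hedged sketch, untested on Jetson; the package and image names are assumptions, with the NVIDIA bits coming from NVIDIA’s apt repos):

    # inside a privileged balena service:
    apt-get update && apt-get install -y docker.io nvidia-container-toolkit
    dockerd &                              # start the nested Docker daemon
    # run a GPU container with the nvidia runtime and check CUDA visibility
    docker run -it --runtime nvidia nvcr.io/nvidia/l4t-ml:r32.5.0-py3 \
        python3 -c "import torch; print(torch.cuda.is_available())"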

Cheers!

Hi @scarlyon - yes, I definitely did not get this use case - and as a matter of fact, it’s the first time I have gotten such a question :slight_smile: .

I have not tried it and do not know whether Docker Engine will run inside balenaEngine on an ARM64 board. What I would suspect is that even if this worked, there might be problems getting GPU support out of the nested container solution, as the nvidia runtime probably binds things differently than balenaEngine does.

Also, all containers enabling CUDA or similar features tend to be big due to all the dependencies they need. Adding Docker on top of all this and nesting the real containers in a container + engine will increase the hardware demands, slow down update and build cycles, and nearly render you blind (log-wise), as all the features in balenaCloud or openBalena for monitoring your containers are separated from the actual container by another layer of abstraction. I am guessing this will also degrade the performance and serviceability of the overall system, hence I would strongly advise against it. I can understand that one would want to reuse existing images as far as possible, but in this case, getting this to work might end up costing you more time and headache down the line than going the proper way from the start.

But I am just an Ambassador and this is my private opinion, maybe the balena guys like @dtischler think differently about my points :slight_smile:


Thanks for your reply, mate.

But we have a specific use case where the GPU needs to be used in a container running inside a balena service. I absolutely agree with your opinion, but anyway, we have to go this way… :slight_smile:

Cheers,
Shane.


@dtischler @jakogut @alanb128

Just checking if there is any update on this issue?

Sorry for the delayed reply on this one @scarlyon – but unfortunately no news to report. :expressionless:

Hi everyone,

I managed to build a version that supports the NVIDIA Container Toolkit and can run NVIDIA’s l4t-ml container with GPU, CUDA, and GStreamer camera support.
It was quite a hassle, and I am still working on cleaning everything up before I publish my changes. Just in case someone really needs this, here are my changes as a git patch:

changes.txt (46.8 KB)

There might be some bugs in the patch, since I removed the IMX477 and ISP support that I am currently working on adding to the image. I will be back next month with a modified fork. I am happy to help balena get this into their master, but I could imagine that licensing will probably prevent this.


@Langhalsdino

Sounds awesome!

Could you share the details of what you did to get it working?

Looking forward to your reply soon!

Cheers,
Shane.

Hi @scarlyon ,

I would recommend you wait for the cleaned-up repo that I am planning to publish. Anyhow, my notes might still be interesting to some, especially the Google bot :slight_smile:

Please take these notes with caution, since they are incomplete and are just documentation of my debugging process.

Here are my notes
  • get it building

    # poky/meta/classes/go.bbclass
    # remove the -trimpath flag from the Go build flags
    
  • Just adding nvidia-container-toolkit to layers/meta-balena-jetson/recipes-core/images/balena-image.inc

    IMAGE_INSTALL:append:jetson-xavier-nx-devkit = " \
        tegra194-nxde-sdcard-flash \
        fan-startup \
        parted \
        gptfdisk \
        tegra-nvpmodel \
        tegra-configs-nvstartup \
        tegra-configs-udev \
        mtd-utils \
        tegra-bluetooth \
        tegra-wifi \
        tegra-firmware-rtl8822 \
        tegra-udrm-probeconf \
        linux-firmware-bcm4354 \
        tegra-firmware-xusb \
        cuda-driver \
        tegra-libraries \
        libnvidia-container-tools \
        go-runtime \
        nvidia-container-toolkit \
        tegra-argus-daemon \
        libvisionworks-sfm \
        libvisionworks-tracking \
    "
    
    balena run -it --gpus all nvcr.io/nvidia/l4t-ml:r32.5.0-py3 bash
    
    python3
    >>> import tensorflow as tf
    2021-12-02 14:46:19.410548: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.10.2'; dlerror: libcudart.so.10.2: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda-10.2/targets/aarch64-linux/lib:
    2021-12-02 14:46:19.410673: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    2021-12-02 14:46:19.410987: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.10.2'; dlerror: libcudart.so.10.2: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda-10.2/targets/aarch64-linux/lib:
    2021-12-02 14:46:19.411033: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
    Segmentation fault (core dumped)
    

    ⇒ the --gpus flag works, but no CUDA available
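    (A quick way to check whether the runtime mounted the CUDA libraries at all; standard commands, the exact path is an assumption for L4T r32.5:)

    # inside the container: is libcudart visible to the dynamic linker?
    ldconfig -p | grep libcudart
    ls /usr/local/cuda-10.2/targets/aarch64-linux/lib/ | grep -i cudart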

  • extended with cudnn etc. in layers/meta-balena-jetson/recipes-core/images/balena-image.inc

    # Heavily increased to ~8GB
    IMAGE_ROOTFS_SIZE = "3899392"
    
    # Bootloader blob is 32MB on the NX
    IMAGE_ROOTFS_SIZE:jetson-xavier-nx-devkit-emmc = "2932736"
    IMAGE_ROOTFS_SIZE:jetson-xavier-nx-devkit = "2932736"
    
    IMAGE_INSTALL:append:jetson-xavier-nx-devkit = " \
        tegra194-nxde-sdcard-flash \
        fan-startup \
        parted \
        gptfdisk \
        tegra-nvpmodel \
        tegra-configs-nvstartup \
        tegra-configs-udev \
        mtd-utils \
        tegra-bluetooth \
        tegra-wifi \
        tegra-firmware-rtl8822 \
        tegra-udrm-probeconf \
        linux-firmware-bcm4354 \
        tegra-firmware-xusb \
        cuda-driver \
        tegra-libraries \
        libnvidia-container-tools \
        go-runtime \
        nvidia-container-toolkit \
        tegra-argus-daemon \
        cuda-toolkit \
        cudnn \
        libvisionworks \
        libvisionworks-sfm \
        libvisionworks-tracking \
        cuda-libraries \
    "
    

    → Works, but library mapping mismatch in l4t.csv, …

  • tried upgrading to a newer meta-tegra and corrected the csv

    balena run -it --gpus all nvcr.io/nvidia/l4t-ml:r32.6.1-py3 bash
    

    Copied recipes-containers from meta-tegra honister and removed docker and virtualization. Furthermore, removed the -trimpath arg of the go build.

    IMAGE_INSTALL:append:jetson-xavier-nx-devkit = " \
        tegra194-nxde-sdcard-flash \
        fan-startup \
        parted \
        gptfdisk \
        tegra-nvpmodel \
        tegra-configs-nvstartup \
        tegra-configs-udev \
        mtd-utils \
        tegra-bluetooth \
        tegra-wifi \
        tegra-firmware-rtl8822 \
        tegra-udrm-probeconf \
        linux-firmware-bcm4354 \
        tegra-firmware-xusb \
        cuda-driver \
        tegra-libraries-core \
        tegra-libraries-camera \
        tegra-libraries-cuda \
        tegra-libraries-eglcore \
        tegra-libraries-gbm \
        tegra-libraries-glescore \
        tegra-libraries-multimedia \
        tegra-libraries-multimedia-utils \
        tegra-libraries-multimedia-v4l \
        libnvidia-container-tools \
        go-runtime \
        nvidia-container-toolkit \
        nvidia-container-runtime \
        tegra-argus-daemon \
        tegra-tools-tegrastats \
        cuda-toolkit \
        cuda-libraries \
        cudnn \
        libvisionworks \
        libvisionworks-container-csv \
        libvisionworks-sfm \
        libvisionworks-sfm-container-csv \
        libvisionworks-tracking \
        libvisionworks-tracking-container-csv \
        vim \
    "
    
    
    

    fixing cuda.csv and cudnn.csv references to the /usr/lib/ path
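    A hypothetical version of that fix (the old/new paths are assumptions):

    # rewrite stale library locations in the runtime's csv mount lists
    sed -i 's|, /usr/lib/aarch64-linux-gnu/|, /usr/lib/|' \
        /etc/nvidia-container-runtime/host-files-for-container.d/cuda.csv \
        /etc/nvidia-container-runtime/host-files-for-container.d/cudnn.csv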

  • check l4t.csv links

    cat /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv | while read line; do file=$(echo $line | sed "s/.*, //g"); if [ -f $file ]; then echo $line >> working.txt; fi; done
    
    cat /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv | while read line; do file=$(echo $line | sed "s/.*, //g"); if [ ! -f $file ]; then echo $line >> not_working.txt; fi; done
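    For context, each csv line has the form `type, path` (the mount lists that libnvidia-container consumes), which is why the sed strips everything up to the comma. A typical entry, as seen later in these notes:

    lib, /usr/lib/libnveglstream_camconsumer.so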
    
  • add nvargus support, …

    balena run -it --ipc=host -v /tmp/argus_socket:/tmp/argus_socket --cap-add SYS_PTRACE --device /dev/video0:/dev/video0 --gpus all nvcr.io/nvidia/l4t-ml:r32.6.1-py3 bash
    
    # unmask the tegra gstreamer recipes (the mask blocks nvargus support) in
    # layers/meta-balena-jetson/conf/layer.conf by removing this line:
    
    -- BBMASK += "/meta-tegra/recipes-multimedia/gstreamer/"
    
    IMAGE_INSTALL:append:jetson-xavier-nx-devkit = " \
        tegra194-nxde-sdcard-flash \
        fan-startup \
        parted \
        gptfdisk \
        tegra-nvpmodel \
        tegra-configs-nvstartup \
        tegra-configs-udev \
        mtd-utils \
        tegra-bluetooth \
        tegra-wifi \
        tegra-firmware-rtl8822 \
        tegra-udrm-probeconf \
        linux-firmware-bcm4354 \
        tegra-firmware-xusb \
        cuda-driver \
        tegra-libraries-core \
        tegra-libraries-camera \
        tegra-libraries-cuda \
        tegra-libraries-eglcore \
        tegra-libraries-gbm \
        tegra-libraries-glescore \
        tegra-libraries-multimedia \
        tegra-libraries-multimedia-utils \
        tegra-libraries-multimedia-v4l \
        libnvidia-container-tools \
        go-runtime \
        nvidia-container-toolkit \
        nvidia-container-runtime \
        tegra-argus-daemon \
        tegra-tools-tegrastats \
        cuda-toolkit \
        cuda-libraries \
        cudnn \
        libvisionworks \
        libvisionworks-container-csv \
        libvisionworks-sfm \
        libvisionworks-sfm-container-csv \
        libvisionworks-tracking \
        libvisionworks-tracking-container-csv \
        tegra-mmapi \
        tegra-mmapi-dev \
        gstreamer1.0 \
        gstreamer1.0-plugins-base \
        gstreamer1.0-plugins-tegra \
        isp-config \
        vim \
    "
    
    # try with
    apt update
    apt-get install software-properties-common build-essential -y
    add-apt-repository ppa:ubuntu-toolchain-r/test
    
    # press enter
    
    apt update
    apt-get install software-properties-common -y 
    apt install gcc-9 gcc-10 -y 
    apt dist-upgrade -y
    
    # Validate existence of GLIBCXX_3.4.29
    strings /usr/lib/aarch64-linux-gnu/libstdc++.so.6 | grep GLIBCXX
    
    GST_DEBUG=4 gst-inspect-1.0  /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so
    
    apt install gtk-doc-tools
    git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad.git
    cd gst-plugins-bad/
    git checkout 1.14.5
    ./autogen.sh
    make
    
    gst-launch-1.0 -e nvcamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=40/1" ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=rpi_v3_imx477_cam0.mp4
    
    layers/poky/meta/conf/distro/include/tcmode-default.inc
    -> set GLIBCVERSION ?= "2.27"
    
    /home/bombus/langhalsdino/yocto/balena-jetson/balena-jetson-nx/layers/poky/meta/conf/distro/include/yocto-uninative.inc
    -> set UNINATIVE_MAXGLIBCVERSION = "2.27"
    

    Issue logs

    GST_DEBUG=4 gst-inspect-1.0  /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so
    
    (gst-plugin-scanner:13): GStreamer-WARNING **: 05:47:19.297: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvv4l2camerasrc.so': /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /usr/lib/aarch64-linux-gnu/libv4l2.so.0)
    0:00:00.828793344    12   0x55aa605a00 INFO            GST_REGISTRY gstregistry.c:1694:scan_and_update_registry: Registry cache changed. Writing new registry cache
    0:00:00.828921568    12   0x55aa605a00 INFO            GST_REGISTRY gstregistrybinary.c:369:priv_gst_registry_binary_write_cache: Building binary registry cache image
    0:00:00.863567584    12   0x55aa605a00 INFO            GST_REGISTRY gstregistrybinary.c:401:priv_gst_registry_binary_write_cache: Writing binary registry cache
    0:00:00.926758272    12   0x55aa605a00 INFO            GST_REGISTRY gstregistrybinary.c:262:gst_registry_binary_cache_finish: Wrote binary registry cache
    0:00:00.926833536    12   0x55aa605a00 INFO            GST_REGISTRY gstregistry.c:1703:scan_and_update_registry: Registry cache written successfully
    0:00:00.926885760    12   0x55aa605a00 INFO            GST_REGISTRY gstregistry.c:1762:ensure_current_registry: registry reading and updating done, result = 1
    0:00:00.927033888    12   0x55aa605a00 INFO                GST_INIT gst.c:807:init_post: GLib runtime version: 2.56.4
    0:00:00.927076512    12   0x55aa605a00 INFO                GST_INIT gst.c:809:init_post: GLib headers version: 2.56.4
    0:00:00.927105248    12   0x55aa605a00 INFO                GST_INIT gst.c:810:init_post: initialized GStreamer successfully
    0:00:00.935838144    12   0x55aa605a00 WARN      GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: /usr/lib/aarch64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so)
    
    (gst-inspect-1.0:12): GStreamer-WARNING **: 05:47:19.427: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so': /usr/lib/aarch64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so)
    Could not load plugin file: Opening module failed: /usr/lib/aarch64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so)
    

    after installing the newer C++ toolchain, …

    GST_DEBUG=4 gst-inspect-1.0  /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so
    
    0:00:00.240528064  6364   0x558fd6ba00 WARN      GST_PLUGIN_LOADING gstplugin.c:792:_priv_gst_plugin_load_file_for_registry: module_open failed: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so)
    
    (gst-inspect-1.0:6364): GStreamer-WARNING **: 05:53:23.725: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so': /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so)
    Could not load plugin file: Opening module failed: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so)
    

    Quick fix (copy from nvidia base image):

    mv /usr/lib/gstreamer-1.0/libgstnvarguscamerasrc.so /usr/lib/gstreamer-1.0/libgstnvarguscamerasrc.so.bckp
    cp /home/root/libgstnvarguscamerasrc.so /usr/lib/gstreamer-1.0/libgstnvarguscamerasrc.so
    
    0:00:05.615482720    12   0x55aa716a00 INFO      GST_PLUGIN_LOADING gstplugin.c:901:_priv_gst_plugin_load_file_for_registry: plugin "/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so" loaded
    Plugin Details:
      Name                     nvarguscamerasrc
      Description              nVidia ARGUS Source Component
      Filename                 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvarguscamerasrc.so
      Version                  1.0.0
      License                  Proprietary
      Source module            nvarguscamerasrc
      Binary package           NvARGUSCameraSrc
      Origin URL               http://nvidia.com/
    
      nvarguscamerasrc: NvArgusCameraSrc
    
      1 features:
      +-- 1 elements
    
    root@1ec26beb1dde:/# gst-launch-1.0 -e nvcamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=40/1" ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=rpi_v3_imx477_cam0.mp4
    WARNING: erroneous pipeline: no element "nvcamerasrc"
    root@1ec26beb1dde:/# gst-launch-1.0 -e nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=40/1" ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=rpi_v3_imx477_cam0.mp4
    nvbuf_utils: Could not get EGL display connection
    nvbuf_utils: ERROR getting proc addr of eglCreateImageKHR
    nvbuf_utils: ERROR getting proc addr of eglDestroyImageKHR
    WARNING: erroneous pipeline: no element "nvv4l2h264enc"
    root@1ec26beb1dde:/# ls
    bin  boot  dev  dst  etc  home  lib  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
    root@1ec26beb1dde:/# gst-launch-1.0 -e nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=40/1" ! omxh264enc ! h264parse ! mp4mux ! filesink location=rpi_v3_imx477_cam0.mp4
    nvbuf_utils: Could not get EGL display connection
    nvbuf_utils: ERROR getting proc addr of eglCreateImageKHR
    nvbuf_utils: ERROR getting proc addr of eglDestroyImageKHR
    WARNING: erroneous pipeline: no element "omxh264enc"
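
    When elements come up missing like this, listing what actually got registered can narrow things down (standard gst-inspect usage; the plugin path is the one used above):

    # list all registered NVIDIA elements
    gst-inspect-1.0 | grep -i nv
    # inspect a specific plugin library directly
    gst-inspect-1.0 /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvvideo4linux2.so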
    

    further quick fix idea:

    copy entire gstreamer plugin folder
    
    # works in the host OS
    GST_DEBUG=4 gst-launch-1.0 -e nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=40/1" ! fakesink
    
    # does not work in the container
    GST_DEBUG=4 gst-launch-1.0 -e nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=40/1" ! fakesink
    
    (Argus) Error NotSupported: EXT_platform_device extension missing (in src/eglutils/EGLUtils.cpp, function getDefaultDisplay(), line 44)
    (Argus) Error InvalidState: Failed to load EGL library (in src/eglutils/EGLUtils.cpp, function exports(), line 213)
    (Argus) Error InvalidState: Failed to get EGL API access function (in src/eglutils/EGLUtils.cpp, function exports(), line 224)
    Caught SIGSEGV
    
    # host os
    root@1ec26beb1dde:/# gst-launch-1.0 -e nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=40/1" ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=rpi_v3_imx477_cam0.mp4
    -> WARNING: erroneous pipeline: no element "h264parse"
    "The h264parse element is part of the gstreamer1.0-plugins-bad package, which seems to not be installed by default."
    
    # fixed by adding missing binding
    lib, /usr/lib/libgstnvegl-1.0.so.0
    lib, /usr/lib/aarch64-linux-gnu/tegra-egl/libEGL_nvidia.so.0
    lib, /usr/lib/libnvdecode2eglimage.so
    lib, /usr/lib/libnveglstream_camconsumer.so
    lib, /usr/lib/libnveglstreamproducer.so
    lib, /usr/lib/libnvidia-eglcore.so.32.6.1
    lib, /usr/lib/libGLESv1_CM_nvidia.so.1
    lib, /usr/lib/libGLESv2.so.2.1.0
    
    # try jpg
    gst-launch-1.0 -e nvarguscamerasrc num-buffers=20 sensor-id=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=40/1" ! nvjpegenc ! multifilesink location=%03d_rpi_v3_imx477_cam0.jpeg
    

    Next step issue logs:

    # fake sink works but no other sinks 
    
    gst-launch-1.0 -e nvarguscamerasrc sensor-id=0 ! "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=40/1" ! nvv4l2h264enc ! h264parse ! mp4mux ! filesink location=rpi_v3_imx477_cam0.mp4
    
    # fixed by copying libs from official image
    mv /usr/lib/gstreamer-1.0/libgstnvarguscamerasrc.so /usr/lib/gstreamer-1.0/libgstnvarguscamerasrc.so.bckp
    cp libgstnvarguscamerasrc.so /usr/lib/gstreamer-1.0/libgstnvarguscamerasrc.so
    
    mv /usr/lib/gstreamer-1.0/libgstnvv4l2camerasrc.so /usr/lib/gstreamer-1.0/libgstnvv4l2camerasrc.so.bckp
    cp libgstnvv4l2camerasrc.so /usr/lib/gstreamer-1.0/libgstnvv4l2camerasrc.so
    
    mv /usr/lib/libv4l2.so.0 /usr/lib/libv4l2.so.0.bckp
    mv /usr/lib/libv4l2.so.0.0.0 /usr/lib/libv4l2.so.0.0.0.bckp
    cp libv4l2.so.0.0.0 /usr/lib/libv4l2.so.0.0.0
    
    mv /usr/lib/libv4l2rds.so.0 /usr/lib/libv4l2rds.so.0.bckp
    mv /usr/lib/libv4l2rds.so.0.0.0 /usr/lib/libv4l2rds.so.0.0.0.bckp
    cp libv4l2rds.so.0.0.0 /usr/lib/libv4l2rds.so.0.0.0
    
    ln -s /usr/lib/libv4l2.so.0.0.0 /usr/lib/libv4l2.so.0
    ln -s /usr/lib/libv4l2rds.so.0.0.0 /usr/lib/libv4l2rds.so.0
    
    mv /usr/lib/gstreamer-1.0/libgstnvvideo4linux2.so /usr/lib/gstreamer-1.0/libgstnvvideo4linux2.so.bckp
    cp libgstnvvideo4linux2.so /usr/lib/gstreamer-1.0/libgstnvvideo4linux2.so
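
    A sanity check after swapping the libraries in (same inspection command used earlier in these notes):

    # confirm the replaced plugins now load without GLIBC/GLIBCXX errors
    GST_DEBUG=4 gst-inspect-1.0 /usr/lib/gstreamer-1.0/libgstnvvideo4linux2.so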
    

This is incredible engineering @Langhalsdino. Wow. :raised_hands:

@acostach might be interested in those notes above.

@Langhalsdino did you ever publish this? Would be great to see the end result!

Hi, @Langhalsdino
Just wondering if there is any update here? :slight_smile:

Cheers!

Hi @Langhalsdino, I’m also interested in whether you managed to publish the cleaned-up repo.

Thanks,
Gerard.

Thanks @Langhalsdino for your long notes. The Google bot did help me :smiley:
