Weston/Wayland on balenaOS for Jetson Nano

Hello folks!

I’m trying to get Wayland running in a balenaOS container on our Jetson Nano.

The Dockerfile is just this:

FROM balenalib/jetson-nano-ubuntu:latest-run

WORKDIR /app

RUN install_packages \
    evtest \
    weston

COPY . ./

ENV INITSYSTEM=on
ENV UDEV=1

CMD [ "/bin/bash", "/app/start.sh" ]

The start script is:

#!/bin/bash

# Talk to the host OS's system D-Bus from inside the container
export DBUS_SYSTEM_BUS_ADDRESS=unix:path=/host/run/dbus/system_bus_socket

# Weston requires XDG_RUNTIME_DIR to exist with 0700 permissions
export XDG_RUNTIME_DIR="/run/shm/wayland"
mkdir -p "$XDG_RUNTIME_DIR"
chmod 0700 "$XDG_RUNTIME_DIR"

/usr/bin/weston --backend=drm-backend.so --tty=1

When the container first starts, it gives me a message saying there is no DRM device. After a few restarts, it starts saying that /dev/tty1 is already in graphics mode…

[10:54:57.243] weston 3.0.0
               http://wayland.freedesktop.org
               Bug reports to: https://bugs.freedesktop.org/enter_bug.cgi?product=Wayland&component=weston&version=3.0.0
               Build: unknown (not built from git or tarball)
[10:54:57.243] Command line: /usr/bin/weston --backend=drm-backend.so --tty=1
[10:54:57.243] OS: Linux, 4.9.140-l4t-r32.1+g0e2f66e, #1 SMP PREEMPT Thu Apr 25 13:17:54 UTC 2019, aarch64
[10:54:57.243] Starting with no config file.
[10:54:57.243] Output repaint window is 7 ms maximum.
[10:54:57.243] Loading module '/usr/lib/aarch64-linux-gnu/libweston-3/drm-backend.so'
[10:54:57.246] initializing drm backend
[10:54:57.247] logind: not running in a systemd session
[10:54:57.247] logind: cannot setup systemd-logind helper (-61), using legacy fallback
[10:54:57.247] /dev/tty1 is already in graphics mode, is another display server running?
[10:54:57.247] fatal: drm backend should be run using weston-launch binary or as root
[10:54:57.247] fatal: failed to create compositor backend

As you can see in the Dockerfile, we’re using the balenalib image for this board, so I would guess the NVIDIA drivers are already in place…

Can someone give me a clue on how to get Wayland working on this device using balenaOS?

Thank you!

Hi, I saw that you already opened an issue for this in the balena-wayland playground repo.
I will pass this to the respective team for further investigation.

Yep! I did, thanks! Looking forward to getting it working :slight_smile:

Hello!

Any updates on that?

Thanks

Hey, we have informed the core maintainer of the library about it, and they will have a look at it when possible. Thanks!

Library? You mean BalenaLib?

Hi @galvesribeiro, the balenalib base images don’t install any of the NVIDIA-specific binaries; you will need to install those yourself from whatever PPA NVIDIA provides for this.

Okay,

NVIDIA provides this driver package, https://developer.nvidia.com/embedded/dlc/l4t-jetson-driver-package-32-1-jetson-nano, but I have no idea how to install it in the container, as the documentation only covers how to build images based on it…

@galvesribeiro the Nano is still a very new platform, so I don’t have a good idea of exactly how to do this, but the best place to start is to look at this demo project https://github.com/acostach/nano-sample-app and try to adapt it to install the packages needed for Wayland.
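
Roughly, I would expect the adaptation to look something like the snippet below. This is an untested sketch: the archive layout (Linux_for_Tegra/nv_tegra/nvidia_drivers.tbz2) and the tegra library path are assumptions based on the R32.1 BSP, so verify them against the demo project:

# Untested sketch: unpack NVIDIA's userland drivers into the container rootfs.
# Assumes curl is available (add it via install_packages otherwise) and that
# the archive layout matches the R32.1 BSP.
RUN curl -sSL -o /tmp/driver_pack.tbz2 \
        https://developer.nvidia.com/embedded/dlc/l4t-jetson-driver-package-32-1-jetson-nano && \
    tar -xjf /tmp/driver_pack.tbz2 -C /tmp Linux_for_Tegra/nv_tegra/nvidia_drivers.tbz2 && \
    tar -xjf /tmp/Linux_for_Tegra/nv_tegra/nvidia_drivers.tbz2 -C / && \
    echo '/usr/lib/aarch64-linux-gnu/tegra' > /etc/ld.so.conf.d/nvidia-tegra.conf && \
    ldconfig && \
    rm -rf /tmp/driver_pack.tbz2 /tmp/Linux_for_Tegra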

Oh!! That is great! :slight_smile: Thanks for the tip. I’ll check on that and report back.

Thanks!

No problem, please let me know how it goes. We are actively learning about the limits and features of this device, so it’s interesting to know.


No success… Same problem…

I was wondering… On the Asus Tinker Board I have the Mali GPU under /dev/dri/; however, on this board I don’t:

root@906686c:/app# ls /dev/
autofs                  hugepages        media0              null               pts         ram3               tty12  tty38  tty63    urandom
block                   i2c-0            mem                 nvhost-as-gpu      ptyp0       ram4               tty13  tty39  tty7     vcs
bus                     i2c-1            memory_bandwidth    nvhost-ctrl        ptyp1       ram5               tty14  tty4   tty8     vcs1
char                    i2c-2            min_online_cpus     nvhost-ctrl-gpu    ptyp2       ram6               tty15  tty40  tty9     vcs2
console                 i2c-3            mmcblk0             nvhost-ctrl-isp    ptyp3       ram7               tty16  tty41  ttyS0    vcs3
constraint_cpu_freq     i2c-4            mmcblk0p1           nvhost-ctrl-isp.1  ptyp4       ram8               tty17  tty42  ttyS1    vcs4
constraint_gpu_freq     i2c-5            mmcblk0p10          nvhost-ctrl-nvdec  ptyp5       ram9               tty18  tty43  ttyS2    vcs5
constraint_online_cpus  i2c-6            mmcblk0p11          nvhost-ctrl-vi     ptyp6       random             tty19  tty44  ttyS3    vcs6
cpu_dma_latency         iio:device0      mmcblk0p12          nvhost-ctxsw-gpu   ptyp7       rfkill             tty2   tty45  ttyTHS1  vcsa
cpu_freq_max            initctl          mmcblk0p13          nvhost-dbg-gpu     ptyp8       rtc                tty20  tty46  ttyTHS2  vcsa1
cpu_freq_min            input            mmcblk0p14          nvhost-gpu         ptyp9       rtc0               tty21  tty47  ttyp0    vcsa2
cuse                    keychord         mmcblk0p15          nvhost-isp         ptypa       rtc1               tty22  tty48  ttyp1    vcsa3
disk                    kmem             mmcblk0p16          nvhost-isp.1       ptypb       shm                tty23  tty49  ttyp2    vcsa4
emc_freq_min            kmsg             mmcblk0p2           nvhost-msenc       ptypc       snd                tty24  tty5   ttyp3    vcsa5
fb0                     log              mmcblk0p3           nvhost-nvdec       ptypd       stderr             tty25  tty50  ttyp4    vcsa6
fb1                     loop-control     mmcblk0p4           nvhost-nvjpg       ptype       stdin              tty26  tty51  ttyp5    vfio
fd                      loop0            mmcblk0p5           nvhost-prof-gpu    ptypf       stdout             tty27  tty52  ttyp6    vhci
full                    loop1            mmcblk0p6           nvhost-sched-gpu   quadd       tegra-crypto       tty28  tty53  ttyp7    watchdog
fuse                    loop2            mmcblk0p7           nvhost-tsec        quadd_auth  tegra_camera_ctrl  tty29  tty54  ttyp8    watchdog0
gpiochip0               loop3            mmcblk0p8           nvhost-tsecb       ram0        tegra_dc_0         tty3   tty55  ttyp9    zero
gpiochip1               loop4            mmcblk0p9           nvhost-tsg-gpu     ram1        tegra_dc_1         tty30  tty56  ttypa
gpu_freq_max            loop5            mqueue              nvhost-vi          ram10       tegra_dc_ctrl      tty31  tty57  ttypb
gpu_freq_min            loop6            mtd0                nvhost-vic         ram11       tegra_mipi_cal     tty32  tty58  ttypc
hidraw0                 loop7            mtd0ro              nvmap              ram12       tty                tty33  tty59  ttypd
hidraw1                 mapper           mtdblock0           port               ram13       tty0               tty34  tty6   ttype
hidraw2                 max_cpu_power    net                 ppp                ram14       tty1               tty35  tty60  ttypf
hidraw3                 max_gpu_power    network_latency     psaux              ram15       tty10              tty36  tty61  uhid
hidraw4                 max_online_cpus  network_throughput  ptmx               ram2        tty11              tty37  tty62  uinput

Isn’t that the real problem here?

It looks like there is a /dev/fb0 there. I’m not sure if this device has DRI. What does the standard Ubuntu system have, and does Wayland work there?

Yeah. It works just fine on regular L4T with drm-backend.so.

Okay, and does the Ubuntu image have /dev/dri? And where is drm-backend.so installed from? In theory, if the kernel modules are installed and you get drm-backend.so into your privileged container, it should all work. It’s just a matter of figuring out how to install that file.
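
Some quick checks inside the privileged container would help narrow it down; just a sketch of the commands I would start with:

# run inside the privileged container
ls -l /dev/dri              # are any DRM device nodes exposed?
lsmod | grep -i drm         # are any DRM kernel modules loaded?
dpkg -S drm-backend.so      # which package ships the weston backend?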

The backend comes with the weston package installed from apt. I’ll check /dev/dri in a moment; I’m away from a PC right now.

If you can figure out the exact combination of kernel modules and things needed in /dev, I’m sure we can get it supported eventually.

I just confirmed… NVIDIA doesn’t expose anything under /dev/dri… Assuming that balena is using the NVIDIA-provided kernel, the modules are already there, but the userland lib/firmware files are not.

Regarding the /lib/firmware folder, note that it is possible to “bind mount” it on an app container by setting the io.balena.features.firmware label in the docker-compose.yml file (or using a single-container app with a single Dockerfile):
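
For example, something along these lines (the "main" service name and build context here are just placeholders):

# sketch of the relevant docker-compose.yml bits
version: '2'
services:
  main:
    build: .
    privileged: true
    labels:
      io.balena.features.firmware: '1'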

With that in place, you can add firmware files through your app container. The /lib/firmware folder will be initially mounted “read only” on the app container, but you can remount it “read write” before copying the file with the command:

# on the app container
mount -o remount,rw /lib/firmware

I am assuming that the NVIDIA userland may include firmware files that would benefit from the process above (drm-backend.so itself is a userspace library installed from apt, as noted earlier), and it sounds like we are all assuming that the kernel already has the required modules in place. If the kernel has the required modules and the hardware is present, but a firmware file is missing, an error message can often be found in the output of the dmesg command on the host OS.
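
For example:

# on the host OS
dmesg | grep -i firmware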