Getting compilation error

DEBUG: Executing shell function do_compile
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 219B done
#2 DONE 0.0s

#3 [internal] load metadata for Docker
#3 ERROR: failed to authorize: failed to fetch anonymous token: Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fdebian%3Apull&service=registry.docker.io": dial tcp: lookup auth.docker.io on 127.0.0.53:53: dial udp 127.0.0.53:53: connect: network is unreachable

[internal] load metadata for Docker


Dockerfile:1

1 | >>> FROM debian:stretch
2 |
3 | VOLUME /mnt/sysroot/inactive

ERROR: failed to solve: failed to fetch anonymous token: Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fdebian%3Apull&service=registry.docker.io": dial tcp: lookup auth.docker.io on 127.0.0.53:53: dial udp 127.0.0.53:53: connect: network is unreachable
Error response from daemon: invalid reference format
WARNING: /home/balena-raspberrypi/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/run.do_compile.37732:153 exit 1 from 'DOCKER_API_VERSION=1.22 docker save "$IMAGE_ID" > /home/balena-raspberrypi/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/work/mkfs-hostapp-image.tar'
WARNING: Backtrace (BB generated script):
#1: do_compile, /home/balena-raspberrypi/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/run.do_compile.37732, line 153
#2: main, /home/balena-raspberrypi/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/run.do_compile.37732, line 160

@alexgg Can you please help us to resolve this error

Hi Varma, I provided a follow-up on a different post, the one that contains all the history. Basically, make sure the host is able to download Docker images (try docker pull debian:stretch), and note that building behind a proxy is not supported.
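The checks above can be sketched as a small script (a minimal sketch; it assumes a Linux build host where 127.0.0.53 in the error is systemd-resolved's local stub resolver, and it degrades gracefully when offline):

```shell
#!/bin/sh
# Sanity-check that the build host can reach Docker Hub before running bitbake.
# If the 127.0.0.53 stub resolver cannot forward queries, every registry
# lookup fails with "network is unreachable", exactly as in the log above.

echo "--- DNS configuration ---"
cat /etc/resolv.conf 2>/dev/null || echo "no /etc/resolv.conf found"

echo "--- resolving auth.docker.io ---"
getent hosts auth.docker.io || echo "DNS lookup failed: fix host DNS first"

echo "--- pulling the base image ---"
docker pull debian:stretch || echo "pull failed: fix networking before building"
```

If the pull succeeds here but still fails inside the build, the problem is usually that the daemon used by the build environment has different network/DNS settings than the interactive one.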

ERROR: os-helpers-1.0-r0 do_test_api: os-helpers: API request failed
@alexgg

I experience the same error. For details, see the following PR: [WIP] Orin Nano by acostach · Pull Request #212 · balena-os/balena-jetson-orin (github.com)

I have the exact same problem in update to kirkstone by dremsol · Pull Request #355 · balena-os/balena-up-board (github.com)

Just saw the following PR being merged: kernel-modules-headers: use kernel-devsrc to provide kernel headers by alexgg · Pull Request #3159 · balena-os/meta-balena (github.com). It could possibly solve my issues as well, according to the changelog.

running

❯ ./balena-jetson-orin/balena-yocto-scripts/build/barys \
    --remove-build \
    --development-image \
    --log \
    --shared-downloads /var/cache/yocto/shared-downloads \
    --shared-sstate /var/cache/yocto/sstate-cache \
    -m jetson-orin-nano-devkit-nvme

results in
log.do_image_hostapp_ext4.2117741.txt (13.3 KB)

and running

./balena-yocto-scripts/build/balena-build.sh -d jetson-orin-nano-devkit-nvme -s /var/cache/yocto/ -k -g "-r --rm-work"

results in

❯ cat balena-jetson-orin/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/log.do_compile
DEBUG: Executing shell function do_compile
Sending build context to Docker daemon   5.12kB
Step 1/5 : FROM debian:bullseye
bullseye: Pulling from library/debian
34df401c391c: Pulling fs layer
34df401c391c: Verifying Checksum
34df401c391c: Download complete
34df401c391c: Pull complete
Digest: sha256:a648e10e02af129706b1fb89e1ac9694ae3db7f2b8439aa906321e68cc281bc0
Status: Downloaded newer image for debian:bullseye
 ---> 189a2f977ff1
Step 2/5 : VOLUME /mnt/sysroot/inactive
 ---> Running in 7bdbf2024bc9
Removing intermediate container 7bdbf2024bc9
 ---> 86acc04b7c88
Step 3/5 : RUN apt-get update && apt-get install -y     ca-certificates         iptables
 ---> Running in 1c05927ea8e1
cgroups: cgroup mountpoint does not exist: unknown
WARNING: exit code 1 from a shell command.

@alexgg this seems to have something to do with my build host (Ubuntu 22.04) and Docker cgroups v1 vs. v2. Do you have a clue how to solve this?

UPDATE:
Found the answer in the following thread:

Adding GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0" to /etc/default/grub, running sudo update-grub, and rebooting the system works as a temporary solution until cgroups v2 is supported.
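For reference, you can check which cgroup hierarchy the build host is on before touching GRUB (a minimal sketch; "cgroup2fs" indicates the unified v2 hierarchy that triggers the "cgroup mountpoint does not exist" error above, "tmpfs" indicates the legacy v1 layout):

```shell
#!/bin/sh
# Report which cgroup hierarchy the build host is running.
#   cgroup2fs -> unified cgroup v2
#   tmpfs     -> legacy cgroup v1 layout
stat -fc %T /sys/fs/cgroup/ 2>/dev/null || echo "unknown (no /sys/fs/cgroup)"

# Temporary workaround described above (assumption: GRUB-based Ubuntu host):
#   1. add systemd.unified_cgroup_hierarchy=0 to GRUB_CMDLINE_LINUX
#      in /etc/default/grub
#   2. sudo update-grub
#   3. reboot
```

After the reboot, the same stat command should print tmpfs, and the docker build step inside do_compile should no longer hit the cgroup error.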