yocto-build-scripts not documented clearly

The instructions for using balena-yocto-scripts/build/balena-build.sh aren’t very clear.

jkridner@slotcar:~/workspace/blue-nano/balena/yocto$ ./build.sh 
+++ dirname ./build.sh
++ cd -- .
++ pwd -P
+ SCRIPTPATH=/home/jkridner/workspace/XXX/balena/yocto
+ cd /home/jkridner/workspace/XXX/balena/yocto/balena-beaglebone
+ ./balena-yocto-scripts/build/balena-build.sh -d beaglebone -i balena-image-flasher -s /home/jkridner/workspace/blue-nano/balena/yocto/build
[balena_lib_environment]: Defaulting to balena-cloud.com
~/workspace/XXX/balena/yocto/balena-beaglebone ~/workspace/XXX/balena/yocto/balena-beaglebone
Submodule details:

 99807501efffc8c5034c88361049650a02511a78 balena-yocto-scripts (v1.19.12)
 39a79c43f1b8ab4426d7a9c1cdeb9a9514101061 contracts (v2.0.12)
 49046a8af48a37c0fdc75e085e86c52fad474199 layers/meta-arm (yocto-3.1.1-18-g49046a8)
 094cc1766365844e9e4dcf46f4f247cad0231715 layers/meta-balena (v2.101.11)
 f2d02cb71eaff8eb285a1997b30be52486c160ae layers/meta-openembedded (f2d02cb71)
 f56fd4ec6835d68c4e9f8a9630963acd34d172e5 layers/meta-rust (remotes/origin/common-rust-native-20-gf56fd4e)
 9dce84ef28a1b5a782193f9715435f0f5c93a2ae layers/meta-ti (07.01.00.006-3-g9dce84ef)
 012ad10a89a889c21e67c27dc37d22520212548f layers/poky (dunfell-23.0.3)
~/workspace/XXX/balena/yocto/balena-beaglebone
9980750: Pulling from balena/yocto-build-env
Digest: sha256:8c339c19b67f15e8f9c11041f7144c1d35eb0c95b1c6ec88147895dc3fa94250
Status: Image is up to date for balena/yocto-build-env:9980750
docker.io/balena/yocto-build-env:9980750
[INFO] Creating and setting builder user 1000:1000.
unix:///var/run/docker.sock /var/run/docker.pid
fatal: not a git repository: /work/../../../.git/modules/balena/yocto/balena-beaglebone

I’m unclear about what the script is looking for and how I should provide it.

Per the following line in the script:

work_dir="$( cd "${script_dir}/../.." && pwd )"

I guess the script expects balena-beaglebone to be the work dir; however, it also seems to require that the work dir contain a .git directory. In my case, balena-beaglebone is a submodule, so .git is a file rather than a directory.
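One way a script could cope with both layouts is to let git resolve the metadata location instead of assuming .git is a directory. This is just a sketch of the idea, not what balena-build.sh currently does:

```shell
#!/bin/sh
# Sketch: resolve the real git metadata directory for a work dir.
# Works both when .git is a directory (normal clone) and when it is
# a "gitdir: ..." pointer file (submodule checkout).
work_dir="${1:-.}"
git_dir="$(git -C "$work_dir" rev-parse --absolute-git-dir)"
echo "$git_dir"
```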

Wow, that was a headache. I couldn’t find any git option to move the metadata under balena-beaglebone/.git rather than having .git be a file with gitdir defined. I manually moved the metadata, then massaged all of the .git files to point to the new metadata path (git-dir) and all of the config files to point to the new paths for all the work trees. Ugh.

Anyway, all seems to be building now. Because only balena-beaglebone is exported as a work dir and the git metadata is used, I don’t see a way around having real git metadata in balena-beaglebone/.git. It would be nice if someone could figure out how to get it there when balena-beaglebone is itself a submodule of another project.

ERROR: mkfs-hostapp-native-1.0-r0 do_compile: Execution of '/work/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/run.do_compile.790731' failed with exit code 1:
cgroups: cgroup mountpoint does not exist: unknown
Error response from daemon: No such image: e665ae2b1d2d
WARNING: exit code 1 from a shell command.

ERROR: Logfile of failure stored in: /work/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/log.do_compile.790731
Log data follows:
| DEBUG: Executing shell function do_compile
| cgroups: cgroup mountpoint does not exist: unknown
| Error response from daemon: No such image: e665ae2b1d2d
| WARNING: exit code 1 from a shell command.
| ERROR: Execution of '/work/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/run.do_compile.790731' failed with exit code 1:
| cgroups: cgroup mountpoint does not exist: unknown
| Error response from daemon: No such image: e665ae2b1d2d
| WARNING: exit code 1 from a shell command.
| 
NOTE: recipe mkfs-hostapp-native-1.0-r0: task do_compile: Failed
ERROR: Task (/work/build/../layers/meta-balena/meta-balena-common/recipes-containers/mkfs-hostapp-native/mkfs-hostapp-native.bb:do_compile) failed with exit code '1'
$ cat balena-beaglebone/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/log.do_compile.790731
DEBUG: Executing shell function do_compile
cgroups: cgroup mountpoint does not exist: unknown
Error response from daemon: No such image: e665ae2b1d2d
WARNING: exit code 1 from a shell command.
ERROR: Execution of '/work/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/run.do_compile.790731' failed with exit code 1:
cgroups: cgroup mountpoint does not exist: unknown
Error response from daemon: No such image: e665ae2b1d2d
WARNING: exit code 1 from a shell command.

Hi Jason, I am afraid that the scripts in balena-yocto-scripts assume that balena-yocto-scripts is a submodule of a git project. So the device project must be created with:

git clone --recursive <URL>

About the build errors: is your Linux distribution using cgroups v2? If so, could you try reverting to v1 with the following kernel command line arguments to see if the issue persists:

systemd.unified_cgroup_hierarchy=0
systemd.legacy_systemd_cgroup_controller
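A quick way to check which cgroup hierarchy the build host is currently running is to look at the filesystem type mounted at /sys/fs/cgroup:

```shell
# Print the filesystem type at /sys/fs/cgroup:
# "cgroup2fs" indicates the unified (v2-only) hierarchy is in use.
stat -fc %T /sys/fs/cgroup
```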

Where do I apply those settings? To begin with, I’m trying to rebuild balena-beaglebone (beaglebone.coffee) as-is, prior to making any modifications.

To revert to using cgroups v1, you need to add kernel command line parameters to the host system (the workstation you are using to build balenaOS). You can find instructions, for example, at https://linuxconfig.org/how-to-set-kernel-boot-parameters-on-linux.
I suggest you initially edit the grub boot entry to append those settings so they apply only to the next boot. As a reminder, the arguments to add for a systemd-enabled Linux distribution are systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller.
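To make the change persistent afterwards on a GRUB-based distribution, the usual approach is to append the parameters in /etc/default/grub and regenerate the config (the exact variable and update command vary by distribution):

```
# /etc/default/grub (excerpt) -- append the cgroup v1 parameters:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller"

# Then regenerate the GRUB configuration, e.g. on Debian/Ubuntu:
#   sudo update-grub
```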
