Yocto Common Errors
Revision as of 07:15, 24 July 2024
Hardware resources
Yocto builds may strongly impact your HW resources in terms of
- required disk space
- required RAM
- CPU overheating
Required disk space
Since Yocto will download the source code that will be used to build the packages included in the image, consider reserving at least 300 GB of free space.
Using an SSD will dramatically speed up the build.
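Before starting a build, you can verify the available space from the intended build directory; a minimal sketch (assumes GNU df, as found on typical Linux build hosts):

```shell
# Report free space (in GB) on the filesystem holding the current directory
# and warn if it is below the ~300 GB this page recommends
free_gb=$(df -BG --output=avail . | tail -n 1 | tr -dc '0-9')
echo "Free space: ${free_gb} GB"
[ "$free_gb" -ge 300 ] || echo "Warning: less than 300 GB free, the build may run out of space"
```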
No space left on device or exceeds fs.inotify.max_user_watches
This error occurs when the filesystem runs out of space or the limit for inotify watches is reached. Inotify is a Linux kernel subsystem that monitors file system events, and the `fs.inotify.max_user_watches` parameter controls the maximum number of file watches per user. When building Yocto, a large number of files may be opened and monitored, potentially exceeding this limit.
Check the current limit for inotify watches:
$ sysctl -n fs.inotify.max_user_watches
65536
Increase the limit to a higher value (e.g., 524288):
$ sudo sysctl -w fs.inotify.max_user_watches=524288
Add the following line to `/etc/sysctl.conf` to persist the change across reboots:
fs.inotify.max_user_watches=524288
Required RAM
Minimum RAM Recommendations
Yocto will check the host CPU's features and try to use all of its threads in order to speed up the build.
Just as an example,
- with an i7-7700K (4 cores / 8 threads), Yocto will activate 8 building threads
- with an i7-9700K (8 cores / 8 threads), Yocto will activate 8 building threads
- with an i7-11700K (8 cores / 16 threads), Yocto will activate 16 building threads
- with an i7-14700K (20 cores / 28 threads), Yocto will activate 28 building threads
Each building thread requires a dedicated amount of RAM, which can easily exceed 3 GB.
To safely run 8 building threads, plan to have 32 (8*4) GB of RAM.
To safely run 16 building threads, plan to have 64 (16*4) GB of RAM.
Reducing RAM Requirements
If you are unable to install additional RAM, you can forcibly reduce the number of building threads by setting the variable BB_NUMBER_THREADS in your local.conf to roughly (available RAM in GB / 4). This also means that it is not reasonable to run a Yocto build with less than 4 GB of RAM.
A typical setting for a 16 GB RAM system is
BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"
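The rule of thumb above can be turned into a quick check; a minimal sketch that reads the host's total RAM from /proc/meminfo and applies the 4 GB-per-thread guideline from this page:

```shell
# Suggest BB_NUMBER_THREADS as (total RAM in GB) / 4, per the rule of thumb above
ram_gb=$(awk '/^MemTotal:/ { printf "%d", $2 / 1024 / 1024 }' /proc/meminfo)
threads=$(( ram_gb / 4 ))
[ "$threads" -lt 1 ] && threads=1
echo "Total RAM: ${ram_gb} GB -> suggested BB_NUMBER_THREADS = ${threads}"
```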
Typical errors due to low memory are
collect2: fatal error: ld terminated with signal 9 [Killed]
compilation terminated.
or:
ERROR: Worker process (12928) exited unexpectedly (-9), shutting down...
ERROR: Worker process (12928) exited unexpectedly (-9), shutting down...
...
or:
fatal error: Killed signal terminated program
compilation terminated.
Using Swap memory
If you have a fast NVMe hard drive, you can use a swap file for additional memory. This is slower than installing dedicated RAM, but can help in rare situations when you would normally run out of memory compiling a large package like Qt, Chromium, or NodeJS.
To begin, check if you already have a swap file:
$ swapon --show
NAME      TYPE SIZE USED PRIO
/swap.img file   8G 3.6G   -2
If you already have a swap file and want to resize it, disable it:
$ sudo swapoff /swap.img
Then, resize the swap file to something larger, like 32G:
$ sudo fallocate -l 32G /swap.img
$ sudo chmod 600 /swap.img
$ sudo mkswap /swap.img
Re-enable the swap file:
$ sudo swapon /swap.img
Verify the swap space is working:
$ swapon --show
NAME      TYPE SIZE USED PRIO
/swap.img file  32G   0B   -2
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi        28Gi       1.2Gi       5.3Gi       6.8Gi       2.2Gi
Swap:           31Gi          0B        31Gi
If you did not have a swap file to begin with, add the swap file to /etc/fstab for persistence across reboots:
$ echo '/swap.img none swap sw 0 0' | sudo tee -a /etc/fstab
CPU overheating
Because Yocto is extremely resource hungry, a build can easily drive the build host into an overheating condition, with unpredictable side effects ranging from crashes to memory corruption.
We strongly suggest keeping the CPU core temperature under control.
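One way to keep an eye on core temperatures without extra tooling is the kernel's thermal sysfs interface; a minimal sketch (zone names vary by machine, and the list may be empty inside a VM or container):

```shell
# Print each thermal zone's current temperature in degrees Celsius
found=0
for zone in /sys/class/thermal/thermal_zone*/temp; do
  [ -r "$zone" ] || continue
  found=1
  awk -v z="${zone%/temp}" '{ printf "%s: %.1f C\n", z, $1 / 1000 }' "$zone"
done
[ "$found" -eq 1 ] || echo "No thermal zones exposed on this host"
```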
Fetching problems
GitHub: bandwidth limitations
Modern PCs can run with 24-32 or even more parallel threads.
During the very first Yocto build, several fetching tasks may run simultaneously.
GitHub may suspend some of them to limit bandwidth. The typical message is
LANG=C git -c gc.autoDetach=false -c core.pager=cat clone --bare --mirror <remote.URL> <local.URL> --progress failed with exit code 128, no output
If you experience this behavior, wait for the build to stop (without forcing it to stop), then try running the fetch task only
$ bitbake -f -c fetch <recipe-name>
and then restart the normal build.
GitHub: unsafe repositories
On April 12, 2022, GitHub announced the need to upgrade the local installation of Git. For more information, please see https://github.blog/2022-04-12-git-security-vulnerability-announced/
Something also changed on GitHub's side: starting from that date, some GitHub users have been experiencing error messages like
fatal: unsafe repository ('<repo-path>' is owned by someone else)
To add an exception for this directory, call:
git config --global --add safe.directory <repo-path>
The message itself suggests the solution (the "git config" command), but if that does not work, you may want to update to the latest Git version.
On an Ubuntu machine, you can run
sudo add-apt-repository ppa:git-core/ppa -y
sudo apt update
sudo apt upgrade -y
git config --global --add safe.directory '*'
This will update Git to the latest version available for your machine and disable the safe-directory check system-wide.
Please note that disabling the check system-wide (the '*' wildcard) is only available starting from Git v2.35.3.
GitHub: Git protocol on port 9418
On January 11, 2022, GitHub disabled the Git protocol on port 9418. For more information, please see https://github.blog/2021-09-01-improving-git-protocol-security-github/
This change will break all recipes using the Git protocol. Fortunately, starting in Yocto Pyro, Yocto will try fetching using HTTPS when the Git protocol fails. See: https://git.yoctoproject.org/poky/commit/meta/classes/mirrors.bbclass?h=pyro&id=85c41bfcf2c62e8b394a7f3d9efdf50af77ff960
For Yocto Morty and older, you will observe the following error:
fatal: remote error: The unauthenticated git protocol on port 9418 is no longer supported. Please see https://github.blog/2021-09-01-improving-git-protocol-security-github/ for more information.
This can be fixed by manually adding HTTPS mirrors to conf/local.conf:
MIRRORS += "\
git://anonscm.debian.org/.*   git://anonscm.debian.org/git/PATH;protocol=https \n \
git://git.gnome.org/.*        git://git.gnome.org/browse/PATH;protocol=https \n \
git://git.savannah.gnu.org/.* git://git.savannah.gnu.org/git/PATH;protocol=https \n \
git://git.yoctoproject.org/.* git://git.yoctoproject.org/git/PATH;protocol=https \n \
git://.*/.*                   git://HOST/PATH;protocol=https \n \
"
codeaurora.org migration
As of March 31, 2023, all codeaurora.org repositories have been migrated to other platforms and the project has been shut down. For more information, please visit: https://bye.codeaurora.org/
Yocto releases with recipes that rely on codeaurora.org may produce errors like the following:
ERROR: gstreamer1.0-plugins-good-1.20.0.imx-r0 do_fetch: Bitbake Fetcher Error: FetchError('Unable to fetch URL from any source.', 'gitsm://source.codeaurora.org/external/mx8-fslc-linux/gstreamer1.0-plugins-good/1.20.0.imximx/gst-plugins-good.git;protocol=https;branch=MM_04.07.00_2205_L5.15.y')
This can be fixed by manually adding the NXP GitHub mirrors to the conf/local.conf file:
MIRRORS += " \
git://source.codeaurora.org/external/imx/   git://github.com/nxp-imx/;protocol=https \n \
https://source.codeaurora.org/external/imx/ https://github.com/nxp-imx/ \n \
http://source.codeaurora.org/external/imx/  http://github.com/nxp-imx/ \n \
gitsm://source.codeaurora.org/external/imx/ gitsm://github.com/nxp-imx/;protocol=https \n \
"
Firewalls / proxies constraints
Firewalls and proxies may block access to specific websites or protocols (such as git).
Double-check with your IT team that you are not subject to any access restriction during the fetching phase.
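A quick way to confirm is to probe the hosts your build actually fetches from; a sketch using two common ones (adjust the URL list to the layers you build):

```shell
# Probe HTTPS reachability of hosts Yocto fetchers commonly hit; a timeout or
# error here usually points at a firewall or proxy restriction
for url in https://github.com https://git.yoctoproject.org; do
  if curl --connect-timeout 5 -sI "$url" >/dev/null 2>&1; then
    echo "$url: reachable"
  else
    echo "$url: blocked or unreachable"
  fi
done
```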
Certificates validation
Messages like
server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
indicate that the certificates on your machine are outdated.
You can try the following commands:
$ sudo apt update
$ sudo apt upgrade
and then check whether the latest version of the ca-certificates package solves the problem.
Note that, starting from January 2022, GitHub dropped git:// support and now only allows https:// (encrypted) access.
If your Ubuntu release reached its EOL, no further CA certificates updates are available: the installed certificates will systematically expire.
In that case, to avoid git cloning errors, disable HTTPS verification (based on CA certificates) by running:
$ git config --global http.sslverify "false"
Can't find package
Due to server maintenance, fetching the source may sometimes fail.
Search for the exact file name and version, and download it from another source into your download directory.
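For example, to pre-seed BitBake's download directory by hand (the mirror URL and paths below are placeholders, not a real mirror):

```shell
# Hypothetical sketch: place a manually downloaded tarball into DL_DIR
# (defaults to <build>/downloads) so do_fetch can skip the broken upstream
DL_DIR="$HOME/yocto/build/downloads"           # adjust to your build tree
mkdir -p "$DL_DIR"
wget -q -P "$DL_DIR" "https://mirror.example.com/evtest-1.25.tar.bz2" || \
  echo "download failed, try another mirror"
# BitBake considers a download complete only when a matching .done stamp exists
touch "$DL_DIR/evtest-1.25.tar.bz2.done"
```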
Checksum mismatch
Sometimes you will get an error like the one listed below.
The reason is a re-packaging on the provider's server.
Locate the package's .bb file:
$ find ../sources/ -name evtest*
./meta-openembedded/meta-oe/recipes-support/evtest
./meta-openembedded/meta-oe/recipes-support/evtest/evtest_1.25.bb
Edit the package recipe:
$ gedit ./meta-openembedded/meta-oe/recipes-support/evtest/evtest_1.25.bb
And replace:
SRC_URI[archive.md5sum] = "770d6af03affe976bdbe3ad1a922c973"
SRC_URI[archive.sha256sum] = "3d34123c68014dae6f7c19144ef79ea2915fa7a2f89ea35ca375a9cf9e191473"
With:
SRC_URI[archive.md5sum] = "0ef3fe5e20fa2dee8994827d48482902"
SRC_URI[archive.sha256sum] = "6e93ef54f0aa7d263f5486ce4a14cac53cf50036bfd20cf045fef2b27ee6664b"
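The replacement values should be recomputed from the tarball you actually downloaded, not copied blindly; a sketch (run here against a temporary file — point the commands at the tarball in your download directory instead):

```shell
# Recompute checksums and print them in SRC_URI form; replace "$f" with the
# real tarball path, e.g. downloads/evtest-1.25.tar.bz2
f=$(mktemp)
printf 'example' > "$f"
md5=$(md5sum "$f" | cut -d ' ' -f 1)
sha=$(sha256sum "$f" | cut -d ' ' -f 1)
echo "SRC_URI[archive.md5sum] = \"$md5\""
echo "SRC_URI[archive.sha256sum] = \"$sha\""
rm -f "$f"
```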
Error log example
ERROR: Fetcher failure for URL: 'http://cgit.freedesktop.org/~whot/evtest/snapshot/evtest-1.25.tar.bz2;name=archive'. Checksum mismatch!
File: '/ws/variscite/yocto_dl_dora/evtest-1.25.tar.bz2' has md5 checksum 0ef3fe5e20fa2dee8994827d48482902 when 770d6af03affe976bdbe3ad1a922c973 was expected
File: '/ws/variscite/yocto_dl_dora/evtest-1.25.tar.bz2' has sha256 checksum 6e93ef54f0aa7d263f5486ce4a14cac53cf50036bfd20cf045fef2b27ee6664b when 3d34123c68014dae6f7c19144ef79ea2915fa7a2f89ea35ca375a9cf9e191473 was expected
If this change is expected (e.g. you have upgraded to a new version without updating the checksums) then you can use these lines within the recipe:
SRC_URI[archive.md5sum] = "0ef3fe5e20fa2dee8994827d48482902"
SRC_URI[archive.sha256sum] = "6e93ef54f0aa7d263f5486ce4a14cac53cf50036bfd20cf045fef2b27ee6664b"
Otherwise you should retry the download and/or check with upstream to determine if the file has become corrupted or otherwise unexpectedly modified.
ERROR: Function failed: Fetcher failure for URL: 'http://cgit.freedesktop.org/~whot/evtest/snapshot/evtest-1.25.tar.bz2;name=archive'. Unable to fetch URL from any source.
ERROR: Logfile of failure stored in: /home/variscite/var-som-mx6-dora-v5/build_var/tmp/work/cortexa9hf-vfp-neon-poky-linux-gnueabi/evtest/1.25-r0/temp/log.do_fetch.2662
NOTE: recipe evtest-1.25-r0: task do_fetch: Failed
ERROR: Task 2966 (/home/variscite/var-som-mx6-dora-v5/sources/meta-openembedded/meta-oe/recipes-support/evtest/evtest_1.25.bb, do_fetch) failed with exit code '1'
NOTE: Running task 2366 of 6931 (ID: 2848, /home/variscite/var-som-mx6-dora-v5/sources/meta-openembedded/meta-oe/recipes-benchmark/nbench-byte/nbench-byte_2.2.3.bb, do_fetch)
kernel compile errors
Some users reported the following error
fatal error: yaml.h: No such file or directory
Although most distros already provide the relevant packages, some do not.
Running the following command on the host PC should fix the problem
$ sudo apt install libyaml-dev
clang compile errors
With specific C/C++ versions installed, clang (required to build packages like Chromium) may fail to build.
When this happens, check the content of the folder
$ ls /usr/lib/gcc/x86_64-linux-gnu/
and, whatever the highest number N (9, 10, 11, 12) shown is, ensure that the matching packages are already installed
$ sudo apt install gcc-<N>-multilib g++-<N>-multilib
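A sketch that picks N automatically from that folder (Ubuntu package names; the install line is left commented so you can review it first):

```shell
# Find the highest GCC major version present on the host
N=$(ls /usr/lib/gcc/x86_64-linux-gnu/ 2>/dev/null | grep -oE '^[0-9]+' | sort -n | tail -n 1)
echo "Highest GCC version found: ${N:-none}"
# Then install the matching multilib packages:
# sudo apt install gcc-${N}-multilib g++-${N}-multilib
```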
UBI size is too small (max_leb_cnt too low)
Sometimes, when building a large image, the build fails because the image cannot fit into the default 512 MB NAND flash.
Log data follows:
| DEBUG: Executing python function set_image_size
| DEBUG: Python function set_image_size finished
| DEBUG: Executing shell function do_image_ubi
| Error: max_leb_cnt too low (4094 needed)
| ERROR: Function failed: do_image_ubi
There are three ways to overcome this issue:
- If you are using a SOM with eMMC, use the eMMC to store the filesystem, and remove the ubi image type from the build process by adding the following line to the conf/local.conf file in your build directory:
IMAGE_FSTYPES_remove_<MACHINE> = "ubi multiubi"
(Replace <MACHINE> with either var-som-mx6, imx6ul-var-dart or var-som-mx7, according to the SOM you are using)
- If you have a SOM with a 512MB NAND flash and no eMMC, you'll have to remove packages from the image you are building in order to make it fit, or just use a different and smaller image.
- If you have a 1GB NAND flash on the SOM and you want to use it to hold the filesystem, edit the conf/machine/include/variscite.inc file under meta-variscite, comment out the 512MB NAND parameters and uncomment the appropriate 1GB NAND parameters.