This commit slightly improves the output of pkg-stats by showing the
progress of the upstream URL checks and latest version retrieval, on a
per-package basis:
Checking URL status
[0001/0062] curlpp
[0002/0062] cmocka
[0003/0062] snappy
[0004/0062] nload
[...]
[0060/0062] librtas
[0061/0062] libsilk
[0062/0062] jhead
Getting latest versions ...
[0001/0064] libglob
[0002/0064] perl-http-daemon
[0003/0064] shadowsocks-libev
[...]
[0061/0064] lua-flu
[0062/0064] python-aiohttp-security
[0063/0064] ljlinenoise
[0064/0064] matchbox-lib
Note that the above sample was run on 64 packages. Only 62 packages
appear in the URL status check, because packages that do not have any
URL in their Config.in file, or don't have a Config.in file at all,
are not checked and therefore not accounted for.
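As a rough illustration only (not the actual pkg-stats code), the
progress format above boils down to a zero-padded counter printed as
each package is processed; the package names are taken from the sample
output:

    packages = ["curlpp", "cmocka", "snappy"]

    print("Checking URL status")
    for i, name in enumerate(packages, start=1):
        # The real script would perform the URL check here.
        print("[%04d/%04d] %s" % (i, len(packages), name))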
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This commit reworks the code that checks whether the upstream URL of
each package (specified in its Config.in file) is valid, using the
aiohttp module. This makes the implementation much more elegant, and
avoids the problematic multiprocessing Pool which is causing issues in
some situations.
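For illustration, a concurrent URL check with aiohttp can be sketched
as follows (this is not the actual pkg-stats implementation; the URL
and the timeout are arbitrary):

    import asyncio

    import aiohttp


    async def check_url(session, name, url):
        # Record the HTTP status, or None if the request failed entirely.
        try:
            async with session.get(url) as resp:
                return name, resp.status
        except (aiohttp.ClientError, asyncio.TimeoutError):
            return name, None


    async def check_urls(urls):
        timeout = aiohttp.ClientTimeout(total=30)
        async with aiohttp.ClientSession(timeout=timeout) as session:
            tasks = [check_url(session, name, url) for name, url in urls.items()]
            return await asyncio.gather(*tasks)


    print(asyncio.run(check_urls({"curlpp": "http://www.curlpp.org"})))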
Suggested-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This commit reworks the code that retrieves the latest upstream
version of each package from release-monitoring.org using the aiohttp
module. This makes the implementation much more elegant, and avoids
the problematic multiprocessing Pool which is causing issues in some
situations.
Since we're now using some async functionality, the script is Python
3.x only, so the shebang is changed to make this clear.
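For illustration, fetching a project's latest version from
release-monitoring.org with aiohttp can look like the sketch below
(the endpoint shown is the public Anitya v2 API, used here as an
example; it is not necessarily the exact one used by pkg-stats):

    import asyncio

    import aiohttp


    async def get_latest_version(session, name):
        url = "https://release-monitoring.org/api/v2/projects/?name=%s" % name
        async with session.get(url) as resp:
            if resp.status != 200:
                return name, None
            data = await resp.json()
            items = data.get("items", [])
            return name, items[0].get("version") if items else None


    async def main(names):
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(*(get_latest_version(session, n) for n in names))


    print(asyncio.run(main(["busybox", "zlib"])))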
Suggested-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
BR2_VERSION_FULL is currently defined as follows:
BR2_VERSION_FULL := $(BR2_VERSION)$(shell $(TOPDIR)/support/scripts/setlocalversion)
This BR2_VERSION_FULL value then gets used as the "VERSION" variable
in the /etc/os-release file.
The logic of "setlocalversion" is that if it is exactly on a tag, it
returns nothing.
If it is on a tag + a number of commits, then it returns only
-XYZ-gABC where XYZ is the number of commits since the last tag, and
ABC the git commit hash (these are extracted from git describe).
This output then gets concatenated to BR2_VERSION which gives
something like 2020.05 or 2020.05-00123-g5bc6a.
The issue is that when you're on a tag specific to your project, which
is not a Buildroot YYYY.MM tag, then the output of setlocalversion is
empty, and all you get as VERSION in os-release is $(BR2_VERSION)
which is not really nice. Worse, if you have another non-official
Buildroot tag between the last official Buildroot tag/version and
where you are, you will get $(BR2_VERSION)-XYZ-gABC, but XYZ will not
correspond to the number of commits since BR2_VERSION, but to the
number of commits since the last tag that "git describe" has found,
which is clearly incorrect.
Here is an example: you're on master, "make print-version" (which
displays BR2_VERSION_FULL) will show:
$ make print-version
2020.08-git-00758-gc351877a6e
So far so good. Now, you create a tag say 5 commits "before" master,
and show BR2_VERSION_FULL again:
$ git tag -a -m "dummy tag" dummy-tag HEAD~5
$ make print-version
2020.08-git-00005-gc351877a6e
This makes you believe you are 5 commits above 2020.08, which is
absolutely wrong.
So this commit simplifies the logic of setlocalversion to simply
return what "git describe" provides, and not prepend $(BR2_VERSION) in
the main Makefile. Since official Buildroot tags match official
Buildroot version names, you get the same output when you're on an
official Buildroot tag, or some commits above a Buildroot tag. And in
all other cases, you get a sensible output. The logic is also adjusted
for the Mercurial case.
In the above situation, with this commit applied, we get:
$ make print-version
dummy-tag-6-g6258cdddeb
(6 commits instead of 5 as we have this very commit applied, but at
least it's 6 commits on top of the dummy-tag)
Finally, if you're not using a version control system, setlocalversion
was already returning nothing, so in this case, the Makefile simply
sets BR2_VERSION_FULL to BR2_VERSION to preserve this behavior.
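As a rough Python illustration of the resulting logic (the real
setlocalversion is a shell script, and the exact git options may
differ), the version now simply is whatever "git describe" reports:

    import subprocess


    def local_version():
        # Report what "git describe" provides, or nothing when not in a
        # git tree at all.
        try:
            out = subprocess.check_output(
                ["git", "describe", "--tags", "--dirty"],
                stderr=subprocess.DEVNULL,
            )
        except (OSError, subprocess.CalledProcessError):
            return ""
        return out.decode().strip()


    print(local_version() or "<no version control>")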
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The defconfig check was introduced by the previous
patch, and runs before building each defconfig, but those
builds are only done every week or so.
Checking whether a defconfig is valid can be done on every
push to the repository, since it only takes a few seconds.
This allows a problem in a defconfig to be detected as
early as possible, and possibly avoids breaking the build
while build-testing all defconfigs.
Introduce a new job template ".defconfig_check" in
gitlab-ci.yml.in and modify generate-gitlab-ci-yml
to create a job for each defconfig to run the test.
Although we could have used a single job to run all the
tests, using one job per defconfig makes it easy to
identify in gitlab which defconfig is failing.
Tested:
https://gitlab.com/kubu93/buildroot/pipelines/138331069
https://gitlab.com/kubu93/buildroot/pipelines/171223758
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
For the same reason as for 50b747f212,
we need to check if the generated configuration file (.config)
contains all symbols present in the defconfig file.
If not, there is an issue with the defconfig.
This script will be used in .gitlab-ci.yml.
Inspired by is_toolchain_usable() function from genrandconfig:
https://git.busybox.net/buildroot/tree/utils/genrandconfig?h=2020.02#n164
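A simplified sketch of the check (illustrative only; details such as
the handling of "# ... is not set" lines are omitted): every
non-comment line of the defconfig must appear verbatim in the
generated .config, otherwise we exit with an error.

    import sys


    def defconfig_lines(path):
        # Yield the stripped, non-comment lines of the defconfig.
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    yield line


    def check_defconfig(defconfig, dotconfig):
        with open(dotconfig) as f:
            config = set(line.strip() for line in f)
        missing = [line for line in defconfig_lines(defconfig) if line not in config]
        if missing:
            for line in missing:
                print("Missing: %s" % line)
            sys.exit(1)


    check_defconfig(sys.argv[1], sys.argv[2])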
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
[yann.morin.1998@free.fr:
- strip defconfig lines when reading them
- use a generator to read the defconfig lines
- no need to strip() again when building the missing list
- testing the list directly, not its len()
- simply sys.exit(1) in the error condition
]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This fixes the following flake8 warning:
support/scripts/pkg-stats:1005:9: E117 over-indented
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
With Python 3, when a package has a version number x-y-z instead of
x.y.z, the version returned by LooseVersion can't be compared, which
raises a TypeError exception:
Traceback (most recent call last):
File "./support/scripts/pkg-stats", line 1062, in <module>
__main__()
File "./support/scripts/pkg-stats", line 1051, in __main__
check_package_cves(args.nvd_path, {p.name: p for p in packages})
File "./support/scripts/pkg-stats", line 613, in check_package_cves
if pkg_name in packages and cve.affects(packages[pkg_name]):
File "./support/scripts/pkg-stats", line 386, in affects
return pkg_version <= cve_affected_version
File "/usr/lib64/python3.8/distutils/version.py", line 58, in __le__
c = self._cmp(other)
File "/usr/lib64/python3.8/distutils/version.py", line 337, in _cmp
if self.version < other.version:
TypeError: '<' not supported between instances of 'str' and 'int'
This patch handles this exception by adding a new return value when
the comparison can't be done. The code is adjusted to take this change
into account. For now, a return value of CVE_UNKNOWN is handled the
same way as a CVE_DOESNT_AFFECT return value, but this can be improved
later on.
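The situation and its handling can be sketched as follows
(CVE_DOESNT_AFFECT and CVE_UNKNOWN are the names used above;
CVE_AFFECTS and the numeric values are illustrative):

    from distutils.version import LooseVersion

    CVE_DOESNT_AFFECT = 0
    CVE_AFFECTS = 1
    CVE_UNKNOWN = 2


    def affects(pkg_version, cve_affected_version):
        try:
            if LooseVersion(pkg_version) <= LooseVersion(cve_affected_version):
                return CVE_AFFECTS
            return CVE_DOESNT_AFFECT
        except TypeError:
            # e.g. "1.2-3" parses to [1, 2, '-', 3], which cannot be
            # compared to [1, 2, 3] under Python 3.
            return CVE_UNKNOWN


    print(affects("1.2-3", "1.2.3"))  # CVE_UNKNOWN instead of a crash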
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The error is misleading: it reports that no name was provided,
when in fact the external.desc file is missing.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
When a br2-external tree has an issue, e.g. a missing file, or does not
have a name, or the name uses invalid chars, we report that condition by
setting the variable BR2_EXTERNAL_ERROR.
That variable is defined in the script support/scripts/br2-external,
which outputs it on stdout, and checked by the Makefile.
Before d027cd75d0, stdout was explicitly redirected to the generated
.mk file, with exec >"${ofile}" as the Makefile and Kconfig
fragments were generated each with their own call to the script, and
the validation phase would emit the BR2_EXTERNAL_ERROR variable in the
Makefile fragment.
But with d027cd75d0, both the Makefile and Kconfig fragments are now
generated with a single call to the script, and as such the semantics
of the script changed: only the actual generators, do_mk and
do_kconfig, have their output redirected, which leaves do_validate
writing to the default stdout, and thus emitting BR2_EXTERNAL_ERROR on
stdout.
In turn, the stdout of the script is interpreted as part of the
Makefile. But this does not end up very well when a br2-external tree
indeed has an error:
- missing an external.desc file:
Makefile:184: *** multiple target patterns. Stop.
- empty external.desc file:
Config.in:22: can't open file "output/.br2-external.in.paths"
So we must redirect the output of the validation step to the
Makefile fragment, so that the error message is correctly caught by the
top-level Makefile.
Note that we don't need to append in do_mk, and we can do an overwrite
redirection: if we go so far as to call do_mk, it means there was no
error, and thus the fragment is empty.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Tested-by: Romain Naour <romain.naour@gmail.com>
As reported by a gitlab runtime test [1] and on the mailing list
[2], some runtime tests are failing on slow host machines when
the qemu-system-<arch> binary is missing on the host.
The boot-qemu-image.py script needs to wait some time after
calling pexpect.spawn() in order to make sure that the qemu
process has been executed by start-qemu.sh.
If start-qemu.sh fails due to a missing qemu-system binary,
an exception will be thrown by child.expect() and should be
caught by the error handling (pexpect.EOF).
After spending a lot of time investigating with Yann E. MORIN
[3], it seems that short-lived child processes are a corner case
that is not handled very well...
Without adding a sleep(1), child.expect() can trigger an
exception before the exitstatus of the spawned process has been
set. This issue can be reproduced on a gitlab runner or
by adding "exit 1" on the first line of start-qemu.sh
(after the shebang).
There is even the same workaround in some pexpect examples [4].
Thanks to Yann for the help while investigating the issue.
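In other words, the workaround looks like the sketch below (simplified;
the expected prompt and the exact error handling are illustrative, not
copied from boot-qemu-image.py):

    import sys
    import time

    import pexpect

    child = pexpect.spawn("./start-qemu.sh", timeout=60, encoding="utf-8")
    # Give a short-lived child (e.g. a missing qemu-system binary) the
    # time to exit, so that its status is known before expect() raises.
    time.sleep(1)
    try:
        child.expect("login:")
    except pexpect.EOF:
        # start-qemu.sh exited early: report it instead of crashing.
        print("qemu could not be started")
        sys.exit(0)
    except pexpect.TIMEOUT:
        print("timeout while waiting for the login prompt")
        sys.exit(1)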
Tested:
https://gitlab.com/kubu93/buildroot/pipelines/138472925
[1] https://gitlab.com/kubu93/buildroot/pipelines/135487475
[2] http://lists.busybox.net/pipermail/buildroot/2020-April/280037.html
[3] http://patchwork.ozlabs.org/project/buildroot/patch/20200418161023.1221799-1-romain.naour@gmail.com/
[4] https://github.com/pexpect/pexpect/blob/master/examples/ssh_tunnel.py#L80
Fixes:
https://gitlab.com/kubu93/buildroot/-/jobs/509053135
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
[yann.morin.1998@free.fr: reorder imports]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This script is intended to be used by gitlab CI to runtime-test the
Qemu images generated by Buildroot's Qemu defconfigs.
This allows troubleshooting different issues that may be associated
with defective builds, by launching a qemu machine, sending the root
password, waiting for a login shell and then performing a shutdown.
This script is inspired by the toolchain builder [1] and the Buildroot
testing infrastructure.
The gitlab CI will call this script for each defconfig build, but only
Qemu defconfigs will be runtime-tested; all other defconfigs are
ignored.
Some Qemu defconfigs must be used with a specific Qemu version (fork)
that is not always available, so the script doesn't error out when it
can't spawn a missing command. That condition is printed in the log
anyway.
Finally, the script starts Qemu in the same way as the Buildroot
testing infrastructure does (using pexpect).
Note:
We noticed some timeout issues with pexpect when the Qemu machine is
powered off. That's because the Qemu process doesn't stop even if the
system is halted (after "System halted"). So the script doesn't error
out when such a timeout occurs. The behaviour depends on the architecture
emulated by Qemu.
[1] https://github.com/bootlin/toolchains-builder/blob/master/build.sh
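For reference, the overall flow is roughly the following (a sketch
only: the prompts, timeouts and login sequence are illustrative, not
the exact ones used by the script):

    import pexpect

    child = pexpect.spawn("./start-qemu.sh", timeout=600, encoding="utf-8")
    child.expect("login:")        # wait for the getty prompt
    child.sendline("root")        # log in as root (password handling omitted)
    child.expect("# ")            # wait for a shell prompt
    child.sendline("poweroff")
    try:
        child.expect(pexpect.EOF)
    except pexpect.TIMEOUT:
        # On some architectures qemu does not exit after "System halted";
        # this is not treated as an error.
        pass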
Signed-off-by: Jugurtha BELKALEM <jugurtha.belkalem@smile.fr>
Signed-off-by: Romain Naour <romain.naour@smile.fr>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
When the 'nvd-path', 'json' and 'html' options are used like this:
--html ~/foo
then the tilde expansion is properly done by the shell. However, when
they are used like this:
--html=~/foo
The shell doesn't do the tilde expansion, and pkg-stats doesn't do
it. This commit modifies pkg-stats to ensure that tilde expansion is
done when parsing the 'nvd-path', 'json' and 'html' arguments.
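One way to get this behaviour with argparse is to run the values
through os.path.expanduser, as in this sketch (illustrative; not
necessarily the exact change made to pkg-stats):

    import argparse
    import os

    parser = argparse.ArgumentParser()
    parser.add_argument("--nvd-path", dest="nvd_path", type=os.path.expanduser)
    parser.add_argument("--json", type=os.path.expanduser)
    parser.add_argument("--html", type=os.path.expanduser)

    args = parser.parse_args(["--html=~/foo"])
    print(args.html)  # e.g. /home/user/foo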
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
[Thomas: improve commit log]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
flake8 complains with:
support/scripts/pkg-stats:339:13: E722 do not use bare 'except'
Due to the construct:
    try:
        something
    except:
        print("some message")
        raise
This is in fact OK because the exception is re-raised. This issue is
discussed at https://github.com/PyCQA/pycodestyle/issues/703, and the
general agreement is that such "bare except" constructs are OK, and
should be hidden from flake8 using a noqa statement.
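For instance (a self-contained illustration, not the actual pkg-stats
code):

    def something():
        raise RuntimeError("boom")


    # The bare except is intentional here: the exception is re-raised,
    # so we only tell flake8 to skip E722 on that line.
    try:
        something()
    except:  # noqa: E722
        print("some message")
        raise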
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
flake8 complains with:
pkg-stats:38:1: E402 module level import not at top of file
This is due to sys.path.append() being called before the import from
getdeveloperlib, but we really need this sys.path.append() to come
first, so let's ignore this flake8 warning.
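The resulting pattern looks roughly like this (the relative path is an
assumption about the tree layout; a noqa marker is one way to silence
the warning, a flake8 configuration entry would be another):

    import os
    import sys

    # getdeveloperlib lives outside support/scripts/, so its directory
    # has to be added to sys.path before the import below can work.
    sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                 "..", "..", "utils"))
    import getdeveloperlib  # noqa: E402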
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The C program inside check-kernel-headers.sh has two checking modes: a
strict one and a loose one.
In strict mode, we want the kernel headers version declared by the
user to match exactly the one of the toolchain.
In loose mode, we want the kernel headers version of the toolchain to
be greater than or equal to the one declared by the user: this is used
when we have a toolchain that has newer headers than the latest
version known by Buildroot.
However, in loose mode, we continue to show the "Incorrect kernel
headers version" message, even though we then return a zero error
code. This is very confusing: you see an error displayed on the
terminal, but the build goes on.
We fix that by doing the loose check first, and returning 0 if it
succeeds. Only then do we move on to the strict check, where we want
the versions to be identical.
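Expressed in Python for illustration (the real check is a small C
program compiled by check-kernel-headers.sh; the version numbers are
examples), the resulting logic is:

    def check_headers(toolchain, declared, loose):
        # Loose check first: a toolchain with newer-or-equal headers is
        # fine, and no message is printed.
        if loose and toolchain >= declared:
            return 0
        # Strict check: the versions must match exactly.
        if toolchain == declared:
            return 0
        print("Incorrect kernel headers version")
        return 1


    print(check_headers((5, 9), (5, 4), loose=True))   # 0: accepted silently
    print(check_headers((5, 9), (5, 4), loose=False))  # 1: message + error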
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
If there is no infra set, or the infra is virtual, the status is set
to 'na'. This is done for the following checks:
- license
- license-files
- hash
- hash-license
- patches
- version
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This value can be used for later processing.
In the buildroot-stats application this is used to create links pointing
to the git repo of buildroot.
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Unify the status check information. The status is stored in a tuple. The
first entry is the status that can be 'ok', 'warning' or 'error'. The
second entry is a verbose message.
The following checks are performed:
- url: status of the URL check
- license: status of the license presence check
- license-files: status of the license file check
- hash: status of the hash file presence check
- patches: status of the patches count check
- pkg-check: status of the check-package script result
- developers: status if a package has developers in the DEVELOPERS file
- version: status of the version check
With that status information the following variables are replaced:
has_license, has_license_files, has_hash, url_status
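For illustration, the unified layout can be pictured as a mapping from
check name to a (status, message) tuple; the messages here are made
up:

    status = {
        "url":           ("ok",      "URL is reachable"),
        "license":       ("warning", "no license information"),
        "license-files": ("error",   "license file missing"),
        "hash":          ("ok",      "hash file present"),
        "patches":       ("warning", "4 patches"),
        "pkg-check":     ("ok",      "check-package is happy"),
        "developers":    ("warning", "no developers in DEVELOPERS"),
        "version":       ("ok",      "up to date"),
    }

    for check, (state, message) in status.items():
        print("%-14s %-8s %s" % (check, state, message))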
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Use the 'parse_developers' function from getdeveloperlib, which
collects the information about the developers and the files they
maintain. Then set the maintainer(s) for each package.
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Remove the patch_count attribute and use a class property instead.
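A sketch of what that looks like (simplified; the way patches are
located is an assumption, not the actual pkg-stats code):

    import fnmatch
    import os


    class Package:
        def __init__(self, pkgdir):
            self.pkgdir = pkgdir

        @property
        def patch_count(self):
            # Count *.patch files below the package directory on demand,
            # instead of storing the value in an attribute.
            count = 0
            for _, _, files in os.walk(self.pkgdir):
                count += len(fnmatch.filter(files, "*.patch"))
            return count


    print(Package("package/busybox").patch_count)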
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This patch changes the type of the latest_version variable to a dict.
This is for better readability/usability of the data. With this, the
json output is more descriptive for later processing.
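Illustrative only (the actual key names may differ), the idea is that
a dict is self-describing in the JSON output, unlike a bare tuple:

    latest_version = {
        "status": "found",   # e.g. found / not found / error
        "version": "1.33.1",
        "id": 230,           # release-monitoring.org project id (example)
    }

    print(latest_version["version"])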
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
During the CVE checking phase, we can still see a huge number of
Python processes (actually 128) running on the host, even though
the CVE step runs entirely in the main thread.
These are actually the worker processes spawned to check the
packages' URL statuses and the latest versions from release-monitoring.
This is because of an issue in Python's multiprocessing implementation:
https://bugs.python.org/issue34172
The problem was already there before the CVE matching step was
introduced, but because pkg-stats was terminating right after the
release-monitoring step, it went unnoticed.
Also, do not hold a reference to the multiprocessing pool from
the Package class, as this is not needed.
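One way to make sure the workers are reaped once the parallel phase is
over (illustrative; not necessarily the exact fix applied here) is to
manage the Pool explicitly instead of relying on it being
garbage-collected:

    from multiprocessing import Pool


    def check_url(name):
        # Placeholder for the real per-package work.
        return name, True


    if __name__ == "__main__":
        names = ["curlpp", "cmocka", "snappy"]
        with Pool(processes=16) as pool:
            results = pool.map(check_url, names)
        # Leaving the "with" block terminates the workers; an explicit
        # close()/join() pair would wait for outstanding tasks instead.
        print(results)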
Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
In Python 3, the functions from the subprocess module return bytes
(and no longer strings as in Python 2), which must be decoded for
further text operations.
Now, pkg-stats can be run in Python 3.
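A minimal example of the kind of change involved (the command here is
arbitrary):

    import subprocess

    out = subprocess.check_output(["echo", "hello"])  # bytes under Python 3
    print(out.decode("utf-8").strip())                # -> "hello"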
Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
It seems that, throughout the series that the CVE pkg-stats support
went through, the support for ignoring CVEs listed in the per-package
<pkg>_IGNORE_CVES variable was forgotten.
Let's re-introduce this, which is now very simple thanks to the CVE
class, its .identifier() property and the .is_cve_ignored() method of
the Package class.
Cc: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The NVD files that are used to build the list of CVEs affecting
Buildroot packages are quite large (a few hundred MB of JSON),
and cause the pkg-stats script to have a huge memory footprint
(a few GB with Python 2.7).
However, because we only need to iterate over the CVE items one by
one, we can process them in a streaming fashion (i.e. decoding one CVE
at a time from the JSON representation). Because the json module from
the Python standard library does not support such a mode of operation,
we switch to the third-party package ijson, which is compatible
with both Python 2 and Python 3.
To run the script with these modifications, one should install
the ijson python package. This can be done with pip:
`pip install ijson`. On Debian based distributions, this can
also be done with the apt package manager:
`apt install python-ijson`.
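For illustration, iterating over a yearly NVD feed with ijson looks
roughly like this (assuming the NVD 1.1 JSON layout, where the entries
live in a top-level "CVE_Items" array; the file name is an example):

    import ijson

    with open("nvdcve-1.1-2020.json", "rb") as f:
        # Only one CVE entry is materialized in memory at a time.
        for cve in ijson.items(f, "CVE_Items.item"):
            print(cve["cve"]["CVE_data_meta"]["ID"])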
Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
Reviewed-by: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Tested-by: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
This commit extends the pkg-stats script to grab information about the
CVEs affecting the Buildroot packages.
To do so, it downloads the NVD database from
https://nvd.nist.gov/vuln/data-feeds in JSON format, and processes the
JSON file to determine which of our packages is affected by which
CVE. The information is then displayed in both the HTML output and the
JSON output of pkg-stats.
To use this feature, you have to pass the new --nvd-path option,
pointing to a writable directory where pkg-stats will store the NVD
database. If the local database is less than 24 hours old, pkg-stats
will not re-download it. If it is more than 24 hours old, pkg-stats
will re-download only the files that have really been updated by
upstream NVD.
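The 24-hour rule can be pictured as the following sketch (the path and
file name are illustrative; checking which files upstream has actually
updated is not shown here):

    import os
    import time


    def needs_refresh(path, max_age=24 * 3600):
        # Re-download when the local copy is missing or older than 24 hours.
        try:
            return time.time() - os.path.getmtime(path) > max_age
        except OSError:
            return True


    print(needs_refresh("nvd/nvdcve-1.1-2020.json.gz"))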
Packages can use the newly introduced <pkg>_IGNORE_CVES variable to
tell pkg-stats that some CVEs should be ignored: it can be because a
patch we have is fixing the CVE, or because the CVE doesn't apply in
our case.
From an implementation point of view:
- A new class CVE implements most of the required functionality:
- Downloading the yearly NVD files
- Reading and extracting relevant data from these files
- Matching Packages against a CVE
- The statistics are extended with the total number of CVEs, and the
total number of packages that have at least one CVE pending.
- The HTML output is extended with these new details. There are no
changes to the code generating the JSON output because the existing
code is smart enough to automatically expose the new information.
This development is a collective effort with Titouan Christophe
<titouan.christophe@railnova.eu> and Thomas De Schampheleire
<thomas.de_schampheleire@nokia.com>.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Most, but not all, of our C code follows the Linux kernel coding style
(as documented in Documentation/process/coding-style.rst). Adjust the
few places that do otherwise:
- Braces:
..but the preferred way, as shown to us by the prophets Kernighan
and Ritchie, is to put the opening brace last on the line
- Spaces after keywords:
Use a space after (most) keywords
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
When Buildroot is released, it knows up to a certain kernel header
version, and no later. However, it is possible that an external
toolchain will be used, that uses headers newer than the latest version
Buildroot knows about.
This may also happen when testing a development, an rc-class, or a newly
released kernel, either in an external toolchain, or with an internal
toolchain with custom headers (same-as-kernel, custom version, custom
git, custom tarball).
In the current state, Buildroot would refuse to use such toolchains,
because the test is for strict equality.
We'd like to make that situation possible, but at the same time we do
not want to let the user be lenient, and we want them to select the
right headers version when it is known.
So, we add a new blind Kconfig option, which is selected by the latest
kernel headers version. This option is then used to decide whether we
do a strict or a loose check of the kernel headers.
Suggested-by: Aaron Sierra <asierra@xes-inc.com>
Signed-off-by: Vincent Fazio <vfazio@xes-inc.com>
[yann.morin.1998@free.fr:
- only do a loose check for the latest version
- expand commit log
]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Tested-by: Vincent Fazio <vfazio@xes-inc.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Commit c4e6d5c8be ("core: implement
per-package SDK and target") had a mistake on the regexp that is used
to match $(PER_PACKAGE_DIR)/<something>/, and due to this, the regexp
was never matched.
The + sign in [^/]+ which was suggested by Yann E. Morin during the
review of the per-package patch series (instead of [^/]*) needs to be
escaped to be taken into account correctly. Without this, the regexp
doesn't match, and the replacement is not done, causing:
(1) For the libtool fixup in pkg-generic.mk, the lack of replacement
causes libtool .la files to not be tweaked as expected, which in
turn causes build failures reported by the autobuilder.
(2) For the fix-rpath, the RPATH of host binaries in the SDK were not
correct.
Interestingly, we have the same regexp in
support/scripts/check-host-rpath, but here the + sign does not need to
be escaped.
Fixes:
http://autobuild.buildroot.net/results/d4d996f3923699e266afd40cc7180de0f7257d99/ (libsvg-cairo)
http://autobuild.buildroot.net/results/56330f86872f67a2ce328e09b4c7b12aa835a432/ (bind)
http://autobuild.buildroot.net/results/9e0fc42d2c9f856b92954b08019b83ce668ef289/ (ibrcommon)
and probably a number of other similar issues
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
This commit implements the core of the move to per-package SDK and
target directories. The main idea is that instead of having a global
output/host and output/target in which all packages install files, we
switch to per-package host and target directories, that only contain
their explicit dependencies.
There are two main benefits:
- Packages will now see only the dependencies they explicitly list in
their <pkg>_DEPENDENCIES variable, and the recursive dependencies
thereof.
- We can support top-level parallel build properly, because a package
only "sees" its own host directory and target directory, isolated
from the build of other packages that can happen in parallel.
It works as follows:
- A new output/per-package/ directory is created, which will contain
one sub-directory per package, and inside it, a "host" directory
and a "target" directory:
output/per-package/busybox/target
output/per-package/busybox/host
output/per-package/host-fakeroot/target
output/per-package/host-fakeroot/host
This output/per-package/ directory is PER_PACKAGE_DIR.
- The global TARGET_DIR and HOST_DIR variable now automatically point
to the per-package directory when PKG is defined. So whenever a
package references $(HOST_DIR) or $(TARGET_DIR) in its build
process, it effectively references the per-package host/target
directories. Note that STAGING_DIR is a sub-dir of HOST_DIR, so it
is handled as well.
- Of course, packages have dependencies, so those dependencies must
be installed in the per-package host and target directories. To do
so, we simply rsync (using hard links to save space and time) the
host and target directories of the direct dependencies of the
package to the current package host and target directories.
We only need to take care of direct dependencies (and not
recursively all dependencies), because we accumulate into those
per-package host and target directories the files installed by the
dependencies. Note that this only works because we make the
assumption that one package does *not* overwrite files installed by
another package.
This is done for "extract dependencies" at the beginning of the
extract step, and for "normal dependencies" at the beginning of the
configure step.
This is basically enough to make per-package SDK and target work. The
only gotcha is that at the end of the build, output/target and
output/host are empty, which means that:
- The filesystem image creation code cannot work.
- We don't have a SDK to build code outside of Buildroot.
In order to fix this, this commit extends the target-finalize step so
that it starts by populating output/target and output/host by
rsync-ing into them the target and host directories of all packages
listed in the $(PACKAGES) variable. It is necessary to do this
sequentially in the target-finalize step and not in each
package. Doing it in package installation means that it can be done in
parallel. In that case, there is a chance that two rsyncs are creating
the same hardlink or directory at the same time, which makes one of
them fail.
This change to per-package directories has an impact on the RPATH
built into the host binaries, as those RPATH now point to various
per-package host directories, and no longer to the global host
directory. We do not try to rewrite such RPATHs during the build as
having such RPATHs is perfectly fine, but we still need to handle two
fallouts from this change:
- The check-host-rpath script, which verifies at the end of each
package installation that it has the appropriate RPATH, is modified
to understand that an RPATH to $(PER_PACKAGE_DIR)/<pkg>/host/lib is
a correct RPATH.
- The fix-rpath script, which munges the RPATH mainly for the SDK
preparation, is modified to rewrite the RPATH to not point to
per-package directories. Indeed the patchelf --make-rpath-relative
call only works if the RPATH points to the ROOTDIR passed as
argument, and this ROOTDIR is the global host directory. Rewriting
the RPATH to not point to per-package host directories prior to
this is an easy solution to this issue.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
genimage makes a full copy of the given rootpath to ${GENIMAGE_TMP}/root
so passing TARGET_DIR would be a waste of time and disk space. We don't
rely on genimage to build the rootfs image, just to insert a pre-built
one in the disk image.
Signed-off-by: Carlos Santos <unixmania@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Back a few years ago, when we were starting to think about top-level
parallel build, we were not sure how to deal with packages that
installed the same files, so we wanted to catch the situation to assess
how prevalent that was, before we decided what to do and how to address
it.
However, the trend nowadays is that packages will install in a
per-package target/ (and staging/ and host/), and the final directories
will be assembled in a reproducible (alphabetical) order, so if two
packages install the same file, the last one will win (as is currently
the case).
Besides, check-uniq-files reports loads of spurious errors when packages
get reinstalled (e.g. during development).
Finally, check-uniq-files is the only script called during the build
that is written in Python.
So, get rid of check-uniq-files.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
When selected, host-ccache is a dependency of almost all packages.
As such, it clutters the dependency graph uselessly.
Signed-off-by: Francois Perrad <francois.perrad@gadz.org>
Reviewed-by: Yann E. MORIN <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The POSIX specification defines a 'trap <action> EXIT' mechanism that is
useful to perform clean-up actions in shell scripts. A trap has two main
advantages over hand-crafted clean-up mechanisms:
- It runs even if the process is terminated by a SIGTERM.
- It runs even if the script stops due to a pipeline failure (set -e).
Now we can make the script stop immediately if a compilation error
occurs, instead of letting it try to run a nonexistent program.
This change may appear to be overkill, but Buildroot is an open source
project and each piece of code is a potential learning tool for other
developers. We must strive to provide good examples.
Signed-off-by: Carlos Santos <unixmania@gmail.com>
Acked-by: Yann E. MORIN <yann.morin@orange.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>