This commit improves pkg-stats to fill in pkg.status['cve'] according
to the situation of the CVEs affecting each package. This status is
then used in the HTML rendering.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Virtual packages (which in pkg-stats speak have "no valid
infrastructure") and packages that have no version specified cannot be
used for CVE checking. They trigger a bunch of warnings from the CVE
checking code, as it cannot parse their version: they don't have any
version. So instead, we simply skip those packages.
A follow-up commit will improve the reporting to be able to
distinguish those packages from packages that have seen their CVEs
checked and don't have any reported.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This commit modifies cve.py, as well as its users cve-checker and
pkg-stats to support CPE ID based matching, for packages that have CPE
ID information.
One of the non-trivial things is that we can't simply iterate over all
CVEs, and then iterate over all our packages to see which packages
have CPE ID information that match the CPEs affected by the
CVE. Indeed, this is an O(n^2) operation.
So instead, we do a pre-filtering of packages potentially affected. In
check_package_cves(), we build a cpe_product_pkgs dict that associates
a CPE product name to the packages that have this CPE product
name. The CPE product name is derived from the CPE information
provided by the package when available; otherwise we use the package
name, which is what was used prior to this patch.
And then, when we look at CVEs, we only consider the packages that
have a CPE product name matching the CPE products affected by the
CVEs. This is done in check_package_cve_affects().
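A minimal sketch of this pre-filtering, assuming illustrative attribute
names (cpeid, name) rather than the exact ones used in pkg-stats:

    from collections import defaultdict

    def cpe_product(pkg):
        # CPE IDs look like cpe:2.3:a:<vendor>:<product>:<version>:...
        # Fall back to the package name when no CPE ID is provided.
        if getattr(pkg, "cpeid", None):
            return pkg.cpeid.split(":")[4]
        return pkg.name

    def build_cpe_product_pkgs(packages):
        cpe_product_pkgs = defaultdict(list)
        for pkg in packages:
            cpe_product_pkgs[cpe_product(pkg)].append(pkg)
        return cpe_product_pkgs

With such a dict, each CVE only needs to look up the CPE product names
it affects, instead of iterating over all packages.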
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This commit improves the pkg-stats script to show the CPE ID of
packages, if available. For now, it doesn't use CPE IDs to match CVEs.
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The logic in gen-bootlin-toolchains was assuming all glibc toolchains
have RPC support, which is no longer true since glibc 2.32 has dropped
RPC support.
It turns out that gen-bootlin-toolchains already had some proper logic
that selects BR2_TOOLCHAIN_HAS_NATIVE_RPC depending on the presence of
BR2_TOOLCHAIN_EXTERNAL_INET_RPC in the toolchain fragment. As such
toolchain fragments have been fixed in https://toolchains.bootlin.com,
we can now rely on this to properly decide if the toolchain has RPC
support or not.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
When the boot-qemu-image.py script was added, we wanted to run
each qemu defconfig in gitlab, so we expected all qemu
defconfigs to generate the start-qemu.sh script in the images
directory.
Don't make it a hard requirement, even if we prefer to be
able to do a runtime test for each qemu defconfig.
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Currently, when the version encoded in a CPE is '-', we assume all
versions are affected, but when it's '*' with no further range
information, we assume no version is affected.
This doesn't make sense, so instead, we handle '*' and '-' in the same
way. If there's no version information available in the CVE CPE ID, we
assume all versions are affected.
This increases quite a bit the number of CVEs and packages affected:
- "total-cves": 302,
- "pkg-cves": 100,
+ "total-cves": 597,
+ "pkg-cves": 135,
For example, CVE-2007-4476 has a CPE ID of:
cpe:2.3:a:gnu:tar:*:*:*:*:*:*:*:*
So it should be taken into account. In this specific case, it is
combined with an AND with CPE ID
cpe:2.3:o:suse:suse_linux:10:*:enterprise_server:*:*:*:*:* but since
we don't support this kind of matching, we'd better be on the safe
side, and report this CVE as affecting tar, do an analysis of the CVE
impact, and document it in TAR_IGNORE_CVES.
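A minimal sketch of the intended rule (names are illustrative, not the
exact cve.py helpers):

    def affects_all_versions(cpe_version, has_range_info):
        # '*' and '-' are now handled the same way: without any further
        # range information, assume every version is affected.
        return cpe_version in ("*", "-") and not has_range_info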
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Reviewed-by: Matt Weber <matthew.weber@rockwellcollins.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Now that pkg-stats is able to generate its output based on the list of
packages enabled in the current configuration, cve-checker doesn't
serve any purpose.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
pkg-stats was initially a Buildroot maintenance oriented tool: it was
designed to examine all Buildroot packages and provide
statistics/details about them.
However, it turns out that a number of details provided by pkg-stats,
especially CVEs, are relevant also for Buildroot users, who would like
to check regularly if their specific Buildroot configuration is
affected by CVEs or not, and possibly check if all packages have
license information, license files, etc.
The cve-checker script was recently introduced to provide an output
relatively similar to pkg-stats, but focused on CVEs only.
But in fact, its main difference is on the set of packages that we
consider: pkg-stats considers all packages, while cve-checker uses
"make show-info" to only consider packages enabled in the current
configuration.
So, this commit introduces a -c option to pkg-stats, to tell pkg-stats
to generate its output based on the list of configured packages. -c is
mutually exclusive with the -p option (explicit list of packages) and
-n option (a number of packages, picked randomly).
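One way to express that mutual exclusion with argparse, as a sketch
(option names and help strings are illustrative, not necessarily how
pkg-stats implements it):

    import argparse

    parser = argparse.ArgumentParser(description="pkg-stats-like option handling")
    group = parser.add_mutually_exclusive_group()
    group.add_argument("-c", dest="configpackages", action="store_true",
                       help="packages enabled in the current configuration")
    group.add_argument("-p", dest="packages",
                       help="explicit comma-separated list of packages")
    group.add_argument("-n", dest="npackages", type=int,
                       help="number of packages, picked randomly")
    args = parser.parse_args()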
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Currently, pkg-stats expects being executed from Buildroot's top-level
source directory. As we are going to extend pkg-stats to cover only
the packages available in the current configuration, it makes sense to
be able to run it from the output directory, which can be anywhere
compared to Buildroot's top-level directory.
This commit adjusts pkg-stats to this, by inferring all Buildroot
paths based on the location of the pkg-stats script itself.
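A minimal sketch of this kind of path inference, assuming the script
lives in support/scripts/ inside the Buildroot tree:

    import os

    # Top-level Buildroot directory, two levels up from support/scripts/.
    brpath = os.path.normpath(os.path.join(
        os.path.dirname(os.path.realpath(__file__)), "..", ".."))
    pkg_dir = os.path.join(brpath, "package")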
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Extract from bug report:
"Code line 120 to line 128 is to check whether the patch containing
"rename from" and "rename to". But it directly use grep to find,
ignoring the patch may be a tar file or else. It can only work on patch
of textfile form."
Fixes:
- https://bugs.buildroot.org/show_bug.cgi?id=11931
Signed-off-by: Fabrice Fontaine <fontaine.fabrice@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The 2020.08-1 release of Bootlin toolchains has brought support for 3
additional architecture variants, so let's support them.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
glibc toolchains must be disabled for static-only configurations.
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Some externals may wish to provide custom init systems for tightly
integrated boot. This has been supported through BR2_INIT_NONE;
however, a downside of BR2_INIT_NONE is that it forces the custom init
system to either use skeleton-custom and roll a custom skeleton for
each target, or use skeleton-init-none, which isn't a complete skeleton.
Allowing br2-external to define custom BR2_INIT_* means they can now
safely 'select' the BR2_PACKAGE_SKELETON_INIT_*, and re-use any of the
skeletons in Buildroot, or one from a br2-external tree.
Signed-off-by: Brandon Maier <brandon.maier@rockwellcollins.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Today, BR2_ROOTFS_SKELETON_CUSTOM is the only way to build a custom
skeleton. But it's limiting, as users must provide a pre-built skeleton
for each target. Supporting a br2-external package allows users to build
up a skeleton and customize it with their own Kconfig options.
Signed-off-by: Brandon Maier <brandon.maier@rockwellcollins.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
apply-patches currently blindly removes *.orig / .*.orig files, as GNU patch
by default writes these as backup files when patches only apply with fuzz.
This is unfortunate, as package sources may contain files ending in .orig as
well, breaking the build. Luckily, GNU patch can be told not to write these
backup files using the --no-backup-if-mismatch option, so use that instead
of the .orig removal step.
--no-backup-if-mismatch has been supported since GNU patch 2.3.8 (1997-06-17)
and by busybox patch if built with CONFIG_DESKTOP, but it isn't supported by
BSD patch, so add logic to dependencies.sh to error out if patch doesn't
support the flag.
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Currently, we handle three kinds of tests: basic, defconfig, and
runtime, and we treat them totally independently from one another,
except for the basic tests, which are ignored when defconfig or runtime
tests are explicitly requested.
The basic tests are also run systematically on all our reference
branches: master, next (when it exists), and the maintenance branches:
YYYY.MM.x.
Furthermore, we can see that the conditions to run each set of tests
are very similar, with only the explicit queries differing by name.
Rework the script so that the conditions are expressed only once, and
each set of tests is decided for each condition. This makes it easier
to decide what tests should run under what conditions.
Using GitLab-CI's schedules, with a variable expressing the actual test
to run, would seem the obvious choice to trigger the pipelines. However,
a schedule is configured for a specific branch, which means we would
need one schedule per branch we want to build, per test case we want to
run, *and* that we update those schedules when we add/remove branches
(e.g. when we open/close 'next', or a maintenance branch). This is not
very nice, as it requires some manual tweaking and twiddling on the web
UI.
Instead, we resort to using triggers, which will be fired from a
cronjob on some server. Using a cronjob allows us to more easily manage
the branches we want to test and the test cases we want to run, to more
easily spread the load over the week, etc.
Note: triggering a pipeline can be done with a simple curl invocation:
$ curl -X POST \
-F "token=${YOUR_TOKEN}" \
-F "ref=${BRANCH_TO_TEST}" \
-F "variables[BR_SCHEDULE_JOBS]=${TEST_TO_RUN}" \
"https://gitlab.com/api/v4/projects/${YOUR_PROJECT_ID}/trigger/pipeline"
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Commit 9e4ffdc8cf modified the output of
'setlocalversion' so that the Buildroot version tag is included in the
output; the version part was added in the Makefile.
Due to differences in behavior of the used git and Mercurial commands, this
caused different output for the Mercurial case, in BR2_VERSION_FULL and thus
/etc/os-release and 'make print-version'. Assuming the official Buildroot
releases are tagged and no project-specific tags are present, the output
after commit 9e4ffdc8cf is:
-hg<commit>
whereas it is expected to be something like:
2020.02.6-hg<commit>
Change the Mercurial case in setlocalversion to behave similarly to git,
looking up the latest tag if the current revision is not itself tagged.
The number of commits after the latest tag is not added, unlike in git, as
this value is not commonly present in Mercurial output, and its added value
can be disputed in this context. Even one commit could bring a huge change
to the sources, so in order to interpret the number one has to look at the
repository anyhow, in which case the commit ID can just be used.
Signed-off-by: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
When multiple conditions match simultaneously, even though that should
not happen in practice, we want the more "important" one to win over
the less "important" ones. For example, a tag is more important than a
branch name or a trigger.
Currently, the latest condition to match takes precedence over any
previous one, while we want the exact opposite.
Fix that with proper fallbacks in else-blocks.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Add a new option that prints the (runtime) path of compiled .py files
when VERBOSE=1 is set.
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
When generating a .pyc file, the original .py source file path is
encoded in it. It is used for various purposes: traceback generation,
.pyc file comparison with its .py source, and code inspection.
By default, the source path used when invoking compileall is encoded in
the .pyc file. Since we use paths relative to TARGET_DIR, we end up with
paths that are only valid when relative to '/' encoded in the installed
.pyc files on the target.
This breaks code inspection at runtime since the original source path
will be invalid unless the code is executed from '/'.
Unfortunately, compileall cannot be forced to use the proper path. It
was not written with cross-compilation usage in mind.
Rework the script to call py_compile.compile() directly with pertinent
options:
- The script now has a new --strip-root argument. This argument is
optional but will always be specified when compiling py files in
buildroot.
- All other (non-optional) arguments are folders in which all
"importable" .py files will be compiled to .pyc.
- Using --strip-root=$(TARGET_DIR), the future runtime path of each .py
file is computed and encoded into the compiled .pyc.
No need to change directory before running the script anymore.
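A minimal sketch of the core idea, assuming a single file and a
hypothetical strip_root value:

    import os
    import py_compile

    def compile_one(py_file, strip_root):
        # Encode the future runtime path (relative to strip_root, i.e.
        # TARGET_DIR) in the generated .pyc, not the build-time path.
        runtime_path = "/" + os.path.relpath(py_file, strip_root)
        py_compile.compile(py_file, dfile=runtime_path, doraise=True)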
The trickery used to handle error reporting was only applicable with
compileall. Since we implement our own "compileall", error reporting
becomes trivial.
Previously, we had a --force option to tell compileall.compile_dir() to
forcibly recompile files if they had changed. Now, we would have to
handle it ourselves. It turns out not to be easy, and would require us to
delve into the format of byte-compiled files to extract metadata and
compare it with the expected values, which even depends on the
python version being used (fortunately, only two for us: python 2.7 and
the latest 3.x).
Still, this is deemed too complex, and byte-compiling is pretty fast, so
much so that it should be eclipsed by the build duration anyway.
So we just drop support for --force, and instead we always byte-compile.
Signed-off-by: Julien Floret <julien.floret@6wind.com>
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
[yann.morin.1998@free.fr:
- always byte-compile
- drop --force
- expand commit log to state so and explain why
]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Only run code when the script is executed directly (not imported).
Factorize command description by using the script's __doc__ variable.
Fix typo in --force help message.
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Currently, the check of defconfigs is run for all branches, even those
that are pushed only to run runtime tests. This is very inconvenient.
In fact, we only want to check the defconfigs on standard branches, that
is master, next, and the maintenance branches.
This will also drastically decrease the number of gitlab-ci minutes used
when one pushes their repo to gitlab.com, where the number of CI minutes
is now going to be pretty severely restricted.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Note that those tests were so far ignored only when requesting a single
defconfig build, or a single runtime test build; everything else
was triggering those tests.
However, it feels more natural that they are also ignored when all
defconfig builds, or all runtime tests, are explicitly requested.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Note that we do not propagate the existing comment, because it is
partially wrong; instead we just keep the per-condition comments.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
When we build the defconfigs, we already check they are correct, so
there is no need to run the correctness check explicitly.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Acked-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Note that we do not propagate the existing comment, because it is
partially wrong; instead we just keep the per-condition comments.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Currently, the image name and version are duplicated in the main
pipeline and the generated, child pipeline.
This is bound to cause a mistake at some point, so let's use the image
from the main pipeline when generating the child one.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This script is currently very crude, but we're going to extend it, at
which point it will be nicer to have functions, local variables, et al.
Introduce a main() in preparation of those future evolutions.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Note that one is silenced, rather than fixed: we indeed need to import
after we add the local directory to the modules search path.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Current X.org X server is incompatible with this driver.
We no longer support unmaintained versions of the X.org X server.
Signed-off-by: Bernd Kuhls <bernd.kuhls@t-online.de>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Drop the debug-level print as noticed by Titouan.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Titouan Christophe <titouan.christophe@railnova.eu>
https://toolchains.bootlin.com/ has been providing for a few years a
number of ready-to-use pre-built toolchains, for a wide range of
architectures (which, it turns out, are all built using Buildroot).
While toolchains.bootlin.com provides Buildroot config fragments to
easily use those toolchains with Buildroot (see [0] for example), this
is not visible anywhere. So instead, we would like to add support for
these toolchains in Buildroot just like we have existing support for
Linaro, ARM, Synopsys, etc. toolchains.
[0] https://toolchains.bootlin.com/downloads/releases/toolchains/aarch64/fragments/aarch64--glibc--bleeding-edge-2020.02-2.frag
However, the number of toolchains provided by toolchains.bootlin.com
is really large, and they are regularly updated. Maintaining that
manually would be time consuming and error-prone. So instead, this
commit introduces a script that automatically generates:
- toolchain/toolchain-external/toolchain-external-bootlin/Config.in.options
- toolchain/toolchain-external/toolchain-external-bootlin/toolchain-external-bootlin.mk
- toolchain/toolchain-external/toolchain-external-bootlin/toolchain-external-bootlin.hash
- support/testing/tests/toolchain/test_external_bootlin.py
We create a single external toolchain package, with a Kconfig "choice"
as a sub-option to select the toolchain variant to be used. The script
contains a Python dict that provides the mapping between the
toolchains provided by toolchains.bootlin.com, and the architecture
options/variants they are applicable to.
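For illustration only, that mapping looks roughly like this (the
entries below are examples, not the actual table maintained by the
script):

    bootlin_toolchains = {
        "aarch64": ["BR2_aarch64"],
        "armv7-eabihf": ["BR2_arm", "BR2_cortex_a8"],
    }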
The test cases make it possible to verify that the toolchain configuration
is correct, and that it is able to build a Busybox-based system. They
don't do any runtime testing, as such testing is already done by
toolchains.bootlin.com: the test cases here are only meant to verify
that the toolchain-external-bootlin package works as expected.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Reviewed-by: Titouan Christophe <titouan.christophe@railnova.eu>
Tested-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This script takes as input on stdin a JSON description of the packages
used for a given configuration. This description is the one generated
by "make show-info".
The script generates the list of all the packages used and whether they
are affected by a CVE. The output is either a JSON or an HTML file similar
to the one generated by pkg-stats.
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Tested-by: Matthew Weber <matthew.weber@rockwellcollins.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The affects() method of the CVE class uses the Package class defined in
pkg-stats. The purpose of migrating the CVE class outside of pkg-stats
was to be able to reuse it from other scripts. So let's remove the
Package dependency and only use the needed information.
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
In 2019, the JSON vulnerability feeds switched their schema from
version 1.0 to 1.1.
The main difference is the removal of the "affects" element that we
were using to check if a package was affected by a CVE.
This information is now available in the "configuration" element which
contains the cpeid as well as properties about the versions
affected. With these properties, instead of a list of the affected
versions, it is possible to express a range of versions.
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
In order to be able to use the CVE checking logic outside of
pkg-stats, move the CVE class in a module that can be used by other
scripts.
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Some CVE entries in the NVD database have version_value set to "-",
which seems to indicate that it applies to all versions of the
software project, or that they don't really know which versions are
affected, and which are not.
So, to give the benefit of the doubt, it seems more appropriate to
consider such CVEs as affecting our packages.
This makes the total number of CVEs affecting our next branch jump
from 141 CVEs to 658 CVEs, but that number will go back down once we
switch to the JSON 1.1 schema. Indeed, in the JSON 1.0 schema, there
are often cases where a version_value is set to "-" *and* specific
versions are listed as well.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This commit slightly improves the output of pkg-stats by showing the
progress of the upstream URL checks and latest version retrieval, on a
package basis:
Checking URL status
[0001/0062] curlpp
[0002/0062] cmocka
[0003/0062] snappy
[0004/0062] nload
[...]
[0060/0062] librtas
[0061/0062] libsilk
[0062/0062] jhead
Getting latest versions ...
[0001/0064] libglob
[0002/0064] perl-http-daemon
[0003/0064] shadowsocks-libev
[...]
[0061/0064] lua-flu
[0062/0064] python-aiohttp-security
[0063/0064] ljlinenoise
[0064/0064] matchbox-lib
Note that the above sample was run on 64 packages. Only 62 packages
appear for the URL status check, because packages that do not have any
URL in their Config.in file, or don't have any Config.in file at all,
are not checked and therefore not accounted for.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This commit reworks the code that checks the upstream URL of each
package (specified in its Config.in file), using the aiohttp
module. This makes the implementation much more elegant, and avoids
the problematic multiprocessing Pool, which is causing issues in some
situations.
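A minimal sketch of the aiohttp-based approach, with simplified error
handling (the URL classification in the real script is more detailed):

    import asyncio
    import aiohttp

    async def check_url(session, name, url):
        try:
            async with session.get(url) as resp:
                return name, resp.status < 400
        except (aiohttp.ClientError, asyncio.TimeoutError):
            return name, False

    async def check_urls(urls):
        timeout = aiohttp.ClientTimeout(total=30)
        async with aiohttp.ClientSession(timeout=timeout) as session:
            return await asyncio.gather(*(check_url(session, n, u)
                                          for n, u in urls.items()))

    # e.g. results = asyncio.run(check_urls({"curlpp": "http://www.curlpp.org"}))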
Suggested-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This commit reworks the code that retrieves the latest upstream
version of each package from release-monitoring.org using the aiohttp
module. This makes the implementation much more elegant, and avoids
the problematic multiprocessing Pool which is causing issues in some
situations.
Since we're now using some async functionality, the script is Python
3.x only, so the shebang is changed to make this clear.
Suggested-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Remove patch that is no longer needed as of upstream commit
1c33be992e8120abd20add8021e4d91d226f5b6a which removed the old VM.
We need to add an exclusion rule for guile modules to check-bin-arch
as they appear as valid ELF binaries but with an architecture of
"None".
Signed-off-by: James Hilliard <james.hilliard1@gmail.com>
[Thomas:
- bump to 3.0.4
- rework how check-bin-arch excludes checking the Guile .go files]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
BR2_VERSION_FULL is currently defined as follows:
BR2_VERSION_FULL := $(BR2_VERSION)$(shell $(TOPDIR)/support/scripts/setlocalversion)
This BR2_VERSION_FULL value then gets used as the "VERSION" variable
in the /etc/os-release file.
The logic of "setlocalversion" is that if it is exactly on a tag, it
returns nothing.
If it is on a tag + a number of commits, then it returns only
-XYZ-gABC where XYZ is the number of commits since the last tag, and
ABC the git commit hash (these are extracted from git describe).
This output then gets concatenated to BR2_VERSION which gives
something like 2020.05 or 2020.05-00123-g5bc6a.
The issue is that when you're on a tag specific to your project, which
is not a Buildroot YYYY.MM tag, then the output of setlocalversion is
empty, and all you get as VERSION in os-release is $(BR2_VERSION)
which is not really nice. Worse, if you have another non-official
Buildroot tag between the last official Buildroot tag/version and
where you are, you will get $(BR2_VERSION)-XYZ-gABC, but XYZ will not
correspond to the number of commits since BR2_VERSION, but since the
last tag that "git describe" as found, which is clearly incorrect.
Here is an example: you're on master, "make print-version" (which
displays BR2_VERSION_FULL) will show:
$ make print-version
2020.08-git-00758-gc351877a6e
So far so good. Now, you create a tag say 5 commits "before" master,
and show BR2_VERSION_FULL again:
$ git tag -a -m "dummy tag" dummy-tag HEAD~5
$ make print-version
2020.08-git-00005-gc351877a6e
This makes you believe you are 5 commits above 2020.08, which is
absolutely wrong.
So this commit simplifies the logic of setlocalversion to simply
return what "git describe" provides, and not prepend $(BR2_VERSION) in
the main Makefile. Since official Buildroot tags match official
Buildroot version names, you get the same output when you're on an
official Buildroot tag, or some commits above a Buildroot tag. And in
other cases, you get a sensible output. The logic is also adjusted for
the Mercurial case.
In the above situation, with this commit applied, we get:
$ make print-version
dummy-tag-6-g6258cdddeb
(6 commits instead of 5 as we have this very commit applied, but at
least it's 6 commits on top of the dummy-tag)
Finally, if you're not using a version control system, setlocalversion
was already returning nothing, so in this case, the Makefile simply
sets BR2_VERSION_FULL to BR2_VERSION to preserve this behavior.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The defconfig check has been introduced by the previous
patch, before building each defconfig, but those builds
are only done every week or more.
Checking if a defconfig is valid can be done on every
push to the repository, since it takes a few seconds.
This allows detecting a problem in a defconfig as soon
as possible, and possibly avoids breaking the build
while build-testing all defconfigs.
Introduce a new job template ".defconfig_check" in
gitlab-ci.yml.in and modify generate-gitlab-ci-yml
to create a job for each defconfig to run the test.
Although we could have used only one job to do all the
tests, using one job per defconfig makes it easy to
identify in gitlab which defconfig is failing.
Tested:
https://gitlab.com/kubu93/buildroot/pipelines/138331069
https://gitlab.com/kubu93/buildroot/pipelines/171223758
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
For the same reason as for 50b747f212,
we need to check if the generated configuration file (.config)
contains all symbols present in the defconfig file.
If not, there is an issue with the defconfig.
This script will be used in .gitlab-ci.yml.
Inspired by is_toolchain_usable() function from genrandconfig:
https://git.busybox.net/buildroot/tree/utils/genrandconfig?h=2020.02#n164
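A minimal sketch of such a check (file handling simplified):

    import sys

    def check_defconfig(defconfig_path, dotconfig_path):
        # Every symbol line of the defconfig must also appear in the
        # generated .config, otherwise the defconfig is broken.
        with open(dotconfig_path) as f:
            configlines = set(line.strip() for line in f)
        with open(defconfig_path) as f:
            missing = [line.strip() for line in f
                       if line.strip() and line.strip() not in configlines]
        if missing:
            for sym in missing:
                print("Missing: %s" % sym)
            sys.exit(1)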
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
[yann.morin.1998@free.fr:
- strip defconfig lines when reading them
- use a generator to read the defconfig lines
- no need to strip() again when building the missing list
- testing the list directly, not its len()
- simply sys.exit(1) in the error condition
]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This fixes the following flake8 warning:
support/scripts/pkg-stats:1005:9: E117 over-indented
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
With python 3, when a package has a version number x-y-z instead of
x.y.z, then the version returned by LooseVersion can't be compared,
which raises a TypeError exception:
Traceback (most recent call last):
File "./support/scripts/pkg-stats", line 1062, in <module>
__main__()
File "./support/scripts/pkg-stats", line 1051, in __main__
check_package_cves(args.nvd_path, {p.name: p for p in packages})
File "./support/scripts/pkg-stats", line 613, in check_package_cves
if pkg_name in packages and cve.affects(packages[pkg_name]):
File "./support/scripts/pkg-stats", line 386, in affects
return pkg_version <= cve_affected_version
File "/usr/lib64/python3.8/distutils/version.py", line 58, in __le__
c = self._cmp(other)
File "/usr/lib64/python3.8/distutils/version.py", line 337, in _cmp
if self.version < other.version:
TypeError: '<' not supported between instances of 'str' and 'int'
This patch handles this exception by adding a new return value for when
the comparison can't be done. The code is adjusted to take care of this
change. For now, a return value of CVE_UNKNOWN is handled the same way
as a CVE_DOESNT_AFFECT return value, but this can be improved later
on.
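A minimal sketch of the new behaviour, using constant names matching
those mentioned above (the actual cve.py code may differ):

    from distutils.version import LooseVersion

    CVE_AFFECTS, CVE_DOESNT_AFFECT, CVE_UNKNOWN = range(3)

    def check_affected(pkg_version, cve_affected_version):
        try:
            if LooseVersion(pkg_version) <= LooseVersion(cve_affected_version):
                return CVE_AFFECTS
            return CVE_DOESNT_AFFECT
        except TypeError:
            # Mixed str/int components (e.g. "1-2-3" vs "1.2.3") cannot
            # be compared; report that we don't know.
            return CVE_UNKNOWN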
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The error is misleading: it reports that no name was provided,
when in fact the external.desc file is missing.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
When a br2-external tree has an issue, e.g. a missing file, or does not
have a name, or the name uses invalid chars, we report that condition by
setting the variable BR2_EXTERNAL_ERROR.
That variable is defined in the script support/scripts/br2-external,
which outputs it on stdout, and checked by the Makefile.
Before d027cd75d0, stdout was explicitly redirected to the generated
.mk file, with exec >"${ofile}" as the Makefile and Kconfig
fragments were generated each with their own call to the script, and
the validation phase would emit the BR2_EXTERNAL_ERROR variable in the
Makefile fragment.
But with d027cd75d0, both the Makefile and Kconfig fragments were now
generated with a single call to the script, and as such the semantics of
the script changed, and only the actual generators, do_mk and
do_kconfig, had their output redirected. Which left do_validate with
the default stdout. Which would emit BR2_EXTERNAL_ERROR on stdout.
In turn, the stdout of the script would be interpreted as part of the
Makefile. But this does not end up very well when a br2-external tree
indeed has an error:
- missing a external.desc file:
Makefile:184: *** multiple target patterns. Stop.
- empty external.desc file:
Config.in:22: can't open file "output/.br2-external.in.paths"
So we must redirect the output of the validation step to the
Makefile fragment, so that the error message is correctly caught by the
top-level Makefile.
Note that we don't need to append in do_mk, and we can do an overwrite
redirection: if we go so far as to call do_mk, it means there was no
error, and thus the fragment is empty.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Tested-by: Romain Naour <romain.naour@gmail.com>
As reported by a gitlab runtime test [1] and on the mailing list
[2], some runtime tests are failing on slow host machines when
the qemu-system-<arch> binary is missing on the host.
The boot-qemu-image.py script needs to wait some time after
calling pexpect.spawn() in order to make sure that the qemu
process has been executed in start-qemu.sh.
If start-qemu.sh failed due to a missing qemu-system binary,
an exception will be thrown by child.expect() and should be
caught by the error handling (pexpect.EOF).
After spending a lot of time investigating with Yann E. MORIN
[3], it seems that short-lived child processes are a corner case
that is not handled very correctly...
Without adding a sleep(1), child.expect() can trigger an
exception before setting the exitstatus of the spawned
process. This issue can be reproduced on a gitlab runner or
by adding "exit 1" in the first line of start-qemu.sh
(after the shebang).
There is even the same workaround in some pexpect examples [4].
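A minimal sketch of the workaround (the prompt string and script name
are illustrative):

    import time
    import pexpect

    child = pexpect.spawn("./start-qemu.sh", timeout=60, encoding="utf-8")
    # Give a short-lived child a chance to run (and possibly exit) before
    # the first expect(), so that exitstatus is meaningful on failure.
    time.sleep(1)
    try:
        child.expect("buildroot login:")
    except pexpect.EOF:
        print("start-qemu.sh exited early, status: %s" % child.exitstatus)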
Thanks to Yann for the help while investigating the issue.
Tested:
https://gitlab.com/kubu93/buildroot/pipelines/138472925
[1] https://gitlab.com/kubu93/buildroot/pipelines/135487475
[2] http://lists.busybox.net/pipermail/buildroot/2020-April/280037.html
[3] http://patchwork.ozlabs.org/project/buildroot/patch/20200418161023.1221799-1-romain.naour@gmail.com/
[4] https://github.com/pexpect/pexpect/blob/master/examples/ssh_tunnel.py#L80
Fixes:
https://gitlab.com/kubu93/buildroot/-/jobs/509053135
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
[yann.morin.1998@free.fr: reorder imports]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This script is intended to be used by gitlab CI to runtime-test the Qemu
images generated by Buildroot's Qemu defconfigs.
This allows troubleshooting different issues that may be associated with
defective builds, by launching a qemu machine, sending the root password,
waiting for the login shell and then performing a shutdown.
This script is inspired by toolchain builder [1] and the Buildroot
testing infrastructure.
The gitlab CI will call this script for each defconfig build, but only
Qemu defconfigs will be runtime-tested; all other defconfigs are ignored.
Some Qemu defconfigs must be used with a specific Qemu version (fork)
that is not always available, so the script doesn't error out when it
can't spawn a missing command. That condition is printed in the
log anyway.
Finally, the script starts Qemu like it's done for the Buildroot
testing infrastructure (using pexpect).
Note:
We noticed some timeout issues with pexpect when the Qemu machine is
powered off. That's because the Qemu process doesn't stop even if the
system is halted (after "System halted"). So the script doesn't error
out when such a timeout occurs. The behaviour depends on the architecture
emulated by Qemu.
[1] https://github.com/bootlin/toolchains-builder/blob/master/build.sh
Signed-off-by: Jugurtha BELKALEM <jugurtha.belkalem@smile.fr>
Signed-off-by: Romain Naour <romain.naour@smile.fr>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
When the 'nvd-path', 'json' and 'html' arguments are used like this:
--html ~/foo
then the tilde expansion is properly done by the shell. However, when
they are used like this:
--html=~/foo
The shell doesn't do the tilde expansion, and pkg-stats doesn't do
it. This commit modifies pkg-stats to ensure that tilde expansion is
done when parsing the 'nvd-path', 'json' and 'html' arguments.
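A minimal sketch of doing the expansion at argument-parsing time:

    import argparse
    import os

    parser = argparse.ArgumentParser()
    # expanduser() turns "~/foo" into "/home/<user>/foo" even when the
    # shell did not expand it (as with --html=~/foo).
    parser.add_argument("--html", type=os.path.expanduser)
    parser.add_argument("--json", type=os.path.expanduser)
    parser.add_argument("--nvd-path", dest="nvd_path", type=os.path.expanduser)
    args = parser.parse_args()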
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
[Thomas: improve commit log]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
flake8 complains with:
support/scripts/pkg-stats:339:13: E722 do not use bare 'except'
Due to the construct:
    try:
        something
    except:
        print("some message")
        raise
Which is in fact OK because the exception is re-raised. This issue is
discussed at https://github.com/PyCQA/pycodestyle/issues/703, and the
general agreement is that these "bare except" are OK, and should be
ignored from flake8 using a noqa statement.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
flake8 complains with:
pkg-stats:38:1: E402 module level import not at top of file
This is due to sys.path.append() being before the import from
getdeveloperlib, but we really need this sys.path.append() to be
before, so let's ignore this flake8 warning.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The C program inside check-kernel-headers.sh has two checking modes: a
strict one and a loose one.
In strict mode, we want the kernel headers version declared by the
user to match exactly the one of the toolchain.
In loose mode, we want the kernel headers version of the toolchain to
be greater than or equal to the one declared by the user: this is used
when we have a toolchain that has newer headers than the latest
version known by Buildroot.
However, in loose mode, we continue to show the "Incorrect kernel
headers version" message, even though we then return a zero error
code. This is very confusing: you see an error displayed on the
terminal, but the build goes on.
We fix that by doing the loose check first, and returning 0 if
it succeeds. Then we move on to the strict check, where we want
the version to be identical.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
If there is no infra set, or the infra is virtual, the status is set to 'na'.
This is done for the following checks:
- license
- license-files
- hash
- hash-license
- patches
- version
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This value can be used for later processing.
In the buildroot-stats application this is used to create links pointing
to the git repo of buildroot.
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Unify the status check information. The status is stored in a tuple. The
first entry is the status, which can be 'ok', 'warning' or 'error'. The
second entry is a verbose message.
The following checks are performed:
- url: status of the URL check
- license: status of the license presence check
- license-files: status of the license file check
- hash: status of the hash file presence check
- patches: status of the patches count check
- pkg-check: status of the check-package script result
- developers: whether a package has developers in the DEVELOPERS file
- version: status of the version check
With that status information the following variables are replaced:
has_license, has_license_files, has_hash, url_status
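For illustration, the per-package status then looks roughly like this
(keys and values below are made up examples):

    status = {
        "url":           ("ok", "found and working"),
        "license":       ("warning", "no license information"),
        "license-files": ("error", "license file missing"),
        "hash":          ("ok", "hash file present"),
        "version":       ("warning", "package version is behind upstream"),
    }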
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Use the 'parse_developers' function from getdeveloperlib, which
collects the information about the developers and the files they
maintain. Then set the maintainer(s) for each package.
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Remove the patch_count attribute and use a class property instead.
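A minimal sketch of the idea, with a simplified Package class:

    import fnmatch
    import os

    class Package:
        def __init__(self, name, pkgdir):
            self.name = name
            self.pkgdir = pkgdir

        @property
        def patch_count(self):
            # Computed on demand instead of being stored as an attribute.
            if not os.path.isdir(self.pkgdir):
                return 0
            return len(fnmatch.filter(os.listdir(self.pkgdir), "*.patch"))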
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This patch changes the type of the latest_version variable to a dict.
This is for better readability/usability of the data. With this, the json
output is more descriptive for later processing.
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
During the CVE checking phase, we can still see a huge amount of
Python processes (actually 128) running on the host, even though
the CVE step is entirely run in the main thread.
These are actually the worker processes spawned to check for the
packages URL statuses and the latest versions from release-monitoring.
This is because of an issue in Python's multiprocessing implementation:
https://bugs.python.org/issue34172
The problem was already there before the CVE matching step was
introduced, but because pkg-stats was terminating right after the
release-monitoring step, it went unnoticed.
Also, do not hold a reference to the multiprocessing pool from
the Package class, as this is not needed.
Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
In Python 3, the functions from the subprocess module return bytes
(and no longer strings as in Python 2), which must be decoded for
further text operations.
Now, pkg-stats can be run in Python 3.
Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
It seems like, throughout the series of changes that the CVE pkg-stats
support went through, the support for ignoring CVEs in the per-package
<pkg>_IGNORE_CVES variable was forgotten.
Let's re-introduce this, which is now very simple thanks to the CVE
class, its .identifier property and the .is_cve_ignored() method of
the Package class.
Cc: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The NVD files that are used to build the list of CVEs affecting
Buildroot packages are quite large (a few hundred MB of JSON),
and cause the pkg-stats script to have a huge memory footprint
(a few GB with Python 2.7).
However, because we only need to iterate on CVE items one by one,
we can process them in a streaming fashion (i.e. decoding one CVE
at a time from the JSON representation). Because the json module from the
python standard library does not support such a mode of operation,
we switch to the third-party package ijson, which is compatible
with both Python 2 and Python 3.
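A minimal sketch of the streaming iteration, assuming the
gzip-compressed NVD 1.x feed layout (top-level "CVE_Items" array):

    import gzip
    import ijson

    def iter_cves(nvd_file):
        with gzip.open(nvd_file, "rb") as f:
            # Decode one CVE entry at a time instead of loading the
            # whole feed into memory.
            for cve in ijson.items(f, "CVE_Items.item"):
                yield cve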
To run the script with these modifications, one should install
the ijson python package. This can be done with pip:
`pip install ijson`. On Debian based distributions, this can
also be done with the apt package manager:
`apt install python-ijson`.
Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
Reviewed-by: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Tested-by: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
This commit extends the pkg-stats script to grab information about the
CVEs affecting the Buildroot packages.
To do so, it downloads the NVD database from
https://nvd.nist.gov/vuln/data-feeds in JSON format, and processes the
JSON file to determine which of our packages is affected by which
CVE. The information is then displayed in both the HTML output and the
JSON output of pkg-stats.
To use this feature, you have to pass the new --nvd-path option,
pointing to a writable directory where pkg-stats will store the NVD
database. If the local database is less than 24 hours old, it will not
re-download it. If it is more than 24 hours old, it will re-download
only the files that have really been updated by upstream NVD.
Packages can use the newly introduced <pkg>_IGNORE_CVES variable to
tell pkg-stats that some CVEs should be ignored: it can be because a
patch we have is fixing the CVE, or because the CVE doesn't apply in
our case.
From an implementation point of view:
- A new class CVE implements most of the required functionalities:
- Downloading the yearly NVD files
- Reading and extracting relevant data from these files
- Matching Packages against a CVE
- The statistics are extended with the total number of CVEs, and the
total number of packages that have at least one CVE pending.
- The HTML output is extended with these new details. There are no
changes to the code generating the JSON output because the existing
code is smart enough to automatically expose the new information.
This development is a collective effort with Titouan Christophe
<titouan.christophe@railnova.eu> and Thomas De Schampheleire
<thomas.de_schampheleire@nokia.com>.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Most, but not all, of our C code follows the Linux kernel code style (as
documented in Documentation/process/coding-style.rst). Adjust the few
places that do it differently:
- Braces:
..but the preferred way, as shown to us by the prophets Kernighan
and Ritchie, is to put the opening brace last on the line
- Spaces after keywords:
Use a space after (most) keywords
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
When Buildroot is released, it knows up to a certain kernel header
version, and no later. However, it is possible that an external
toolchain will be used, that uses headers newer than the latest version
Buildroot knows about.
This may also happen when testing a development, an rc-class, or a newly
released kernel, either in an external toolchain, or with an internal
toolchain with custom headers (same-as-kernel, custom version, custom
git, custom tarball).
In the current state, Buildroot would refuse to use such toolchains,
because the test is for strict equality.
We'd like to make that situation possible, but at the same time we
don't want to be lenient either: the user should still select the right
headers version when it is known.
So, we add a new Kconfig blind option that the latest kernel headers
version selects. This option is then used to decide whether we do a
strict or loose check of the kernel headers.
Suggested-by: Aaron Sierra <asierra@xes-inc.com>
Signed-off-by: Vincent Fazio <vfazio@xes-inc.com>
[yann.morin.1998@free.fr:
- only do a loose check for the latest version
- expand commit log
]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Tested-by: Vincent Fazio <vfazio@xes-inc.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Commit c4e6d5c8be ("core: implement
per-package SDK and target") had a mistake on the regexp that is used
to match $(PER_PACKAGE_DIR)/<something>/, and due to this, the regexp
was never matched.
The + sign in [^/]+ which was suggested by Yann E. Morin during the
review of the per-package patch series (instead of [^/]*) needs to be
escaped to be taken into account correctly. Without this, the regexp
doesn't match, and the replacement is not done, causing:
(1) For the libtool fixup in pkg-generic.mk, the lack of replacement
causes libtool .la files to not be tweaked as expected, which in
turn causes build failures reported by the autobuilder.
(2) For the fix-rpath, the RPATH of host binaries in the SDK were not
correct.
Interestingly, we have the same regexp in
support/scripts/check-host-rpath, but here the + sign does not need to
be escaped.
Fixes:
http://autobuild.buildroot.net/results/d4d996f3923699e266afd40cc7180de0f7257d99/ (libsvg-cairo)
http://autobuild.buildroot.net/results/56330f86872f67a2ce328e09b4c7b12aa835a432/ (bind)
http://autobuild.buildroot.net/results/9e0fc42d2c9f856b92954b08019b83ce668ef289/ (ibrcommon)
and probably a number of other similar issues
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
This commit implements the core of the move to per-package SDK and
target directories. The main idea is that instead of having a global
output/host and output/target in which all packages install files, we
switch to per-package host and target directories, that only contain
their explicit dependencies.
There are two main benefits:
- Packages will now see only the dependencies they explicitly list in
their <pkg>_DEPENDENCIES variable, and the recursive dependencies
thereof.
- We can support top-level parallel build properly, because a package
only "sees" its own host directory and target directory, isolated
from the build of other packages that can happen in parallel.
It works as follows:
- A new output/per-package/ directory is created, which will contain
one sub-directory per package, and inside it, a "host" directory
and a "target" directory:
output/per-package/busybox/target
output/per-package/busybox/host
output/per-package/host-fakeroot/target
output/per-package/host-fakeroot/host
This output/per-package/ directory is PER_PACKAGE_DIR.
- The global TARGET_DIR and HOST_DIR variable now automatically point
to the per-package directory when PKG is defined. So whenever a
package references $(HOST_DIR) or $(TARGET_DIR) in its build
process, it effectively references the per-package host/target
directories. Note that STAGING_DIR is a sub-dir of HOST_DIR, so it
is handled as well.
- Of course, packages have dependencies, so those dependencies must
be installed in the per-package host and target directories. To do
so, we simply rsync (using hard links to save space and time) the
host and target directories of the direct dependencies of the
package to the current package host and target directories.
We only need to take care of direct dependencies (and not
recursively all dependencies), because we accumulate into those
per-package host and target directories the files installed by the
dependencies. Note that this only works because we make the
assumption that one package does *not* overwrite files installed by
another package.
This is done for "extract dependencies" at the beginning of the
extract step, and for "normal dependencies" at the beginning of the
configure step.
This is basically enough to make per-package SDK and target work. The
only gotcha is that at the end of the build, output/target and
output/host are empty, which means that:
- The filesystem image creation code cannot work.
- We don't have a SDK to build code outside of Buildroot.
In order to fix this, this commit extends the target-finalize step so
that it starts by populating output/target and output/host by
rsync-ing into them the target and host directories of all packages
listed in the $(PACKAGES) variable. It is necessary to do this
sequentially in the target-finalize step and not in each
package: doing it during package installation would mean it happens in
parallel, and in that case, there is a chance that two rsyncs create
the same hardlink or directory at the same time, which makes one of
them fail.
This change to per-package directories has an impact on the RPATH
built into the host binaries, as those RPATH now point to various
per-package host directories, and no longer to the global host
directory. We do not try to rewrite such RPATHs during the build as
having such RPATHs is perfectly fine, but we still need to handle two
fallouts from this change:
- The check-host-rpath script, which verifies at the end of each
package installation that it has the appropriate RPATH, is modified
to understand that a RPATH to $(PER_PACKAGE_DIR)/<pkg>/host/lib is
a correct RPATH.
- The fix-rpath script, which munges the RPATH mainly for the SDK
preparation, is modified to rewrite the RPATH to not point to
per-package directories. Indeed the patchelf --make-rpath-relative
call only works if the RPATH points to the ROOTDIR passed as
argument, and this ROOTDIR is the global host directory. Rewriting
the RPATH to not point to per-package host directories prior to
this is an easy solution to this issue.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
genimage makes a full copy of the given rootpath to ${GENIMAGE_TMP}/root
so passing TARGET_DIR would be a waste of time and disk space. We don't
rely on genimage to build the rootfs image, just to insert a pre-built
one in the disk image.
Signed-off-by: Carlos Santos <unixmania@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Back a few years ago, when we were starting to think about top-level
parallel build, we were not sure how to deal with packages that
installed the same files, so we wanted to catch the situation to assess
how prevalent that was, before we decided what to do and how to address
it.
However, the trend nowadays is that packages will install in a
per-package target/ (and staging/ and host/), and the final directories
will be assembled in a reproducible (alphabetical) order, so if two
packages install the same file, the last one will win (as is currently
the case).
Besides, check-uniq-files reports loads of spurious errors when packages
get reinstalled (e.g. during development).
Finally, check-uniq-files is the only script called during the build
that is written in Python.
So, get rid of check-uniq-files.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
When selected, host-ccache is a dependency of almost all packages.
As such, it clutters the dependency graph uselessly.
Signed-off-by: Francois Perrad <francois.perrad@gadz.org>
Reviewed-by: Yann E. MORIN <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The POSIX specification defines a 'trap <action> EXIT' mechanism that is
useful to perform clean-up actions in shell scripts. A trap has two main
advantages over hand-crafted clean-up mechanisms:
- It runs even if the process is terminated by a SIGTERM.
- It runs even if the script stops due to a pipeline failure (set -e).
Now we can make the script stop immediately if a compilation error
occurs, instead of letting it try to run a nonexistent program.
This change may appear to be overkill but Buildroot is an open source
project and each piece of code is a potential learning tool for other
developments. We must strive to provide good examples.
Signed-off-by: Carlos Santos <unixmania@gmail.com>
Acked-by: Yann E. MORIN <yann.morin@orange.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Some installations mount /tmp with the 'noexec' option, which prevents
running the program generated there to check the kernel headers.
Avoid the problem by generating the program under $(BUILD_DIR), passed
as the first argument to check-kernel-headers.sh.
We could globally export a TMPDIR environment variable with some path
under $(BUILD_DIR), but such a solution would be too intrusive, depriving
the user of the freedom to set TMPDIR at will (or as needed).
Fixes: https://bugs.busybox.net/show_bug.cgi?id=12241
Signed-off-by: Carlos Santos <unixmania@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
As suggested by Baruch Siach, using "git rev-parse HEAD" is a lot
simpler than playing around with "git log" to just retrieve the commit
id corresponding to the current HEAD.
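As an illustration, the call boils down to something like this (a minimal
Python sketch, not the exact pkg-stats code):

    import subprocess

    def get_current_commit():
        # "git rev-parse HEAD" prints the commit id of the current HEAD,
        # whatever branch or tag is currently checked out.
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"]).decode().strip()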
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
pkg-stats extracts the Buildroot commit id from which the package
information was collected. However, when doing so, it always assumes
we're using the master branch, by running "git log master".
But in fact, pkg-stats can be run from any branch/tag, so it makes a
lot more sense to use "git log HEAD".
Cc: victor.huesca@bootlin.com
Cc: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Now that we can order packages from biggest to smallest, it makes sense
to assign the most aggressive colours to the biggest packages.
As such, reorder the current colours so that we have, in order:
- red-ish
- orange-ish
- yellow-ish
- purple-ish
- eggplant-ish (is that even a colour? :-] )
- some-indeterminate-blue-ish
- dark-green-ish
- light-green-ish
For the previous, smallest-first ordering, it does not matter much what
the ordering is: the actual colours are still somewhat-unpredictably
assigned to packages, depending on the cut-off limit...
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Currently, the packages are sorted smallest first, and biggest last
(with unknown and others second-to-last and last, resp.).
Add an option to invert the ordering (but keeping unknown and others at
their current positions).
This has the nice side effect that we can now control the colours
assigned to the biggest package(s), as the colours are cycled from the
first to the last. Currently, the biggest package gets a reddish colour,
which is appropriate, but the second gets a greenish one, which is not
as appropriate (but changing that can come later).
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
When dealing with embedded devices, storage is more often than not some
kind of flash device, on which the memory is usually counted as powers
of 1024 instead of powers of 1000. As such, people may prefer reports
using IEC prefixes [0] instead of the SI prefixes.
Add an option to that effect.
We use argparse's ability to use custom actions [1] [2] to provide a
set of options that act on a single boolean, but have a single help
entry and internally ensure consistency of the settings. We could have
used the more conventional store_true/store_false actions instead, but
that would have meant either two help entries, one for each set of
options, and/or some logic after parse_args() to check the validity of
the settings.
[0] https://en.wikipedia.org/wiki/Binary_prefix
[1] https://docs.python.org/2/library/argparse.html#action
[2] https://docs.python.org/2/library/argparse.html#argparse.Action
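A minimal sketch of such a custom action (option and attribute names are
illustrative, not necessarily those used by the script):

    import argparse

    class PrefixAction(argparse.Action):
        """Map --iec/--binary and --si/--decimal onto a single boolean."""
        def __init__(self, option_strings, dest, **kwargs):
            super().__init__(option_strings, dest, nargs=0, **kwargs)

        def __call__(self, parser, namespace, values, option_string=None):
            setattr(namespace, self.dest,
                    option_string in ("--iec", "--binary"))

    parser = argparse.ArgumentParser()
    parser.add_argument("--iec", "--binary", "--si", "--decimal",
                        action=PrefixAction, dest="iec", default=False,
                        help="use IEC (binary, powers of 1024) or SI "
                             "(decimal, powers of 1000, the default) prefixes")
    print(parser.parse_args(["--iec"]).iec)   # True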
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Currently, we group packages that contribute less than 1% into the
"Other" category.
However, in some cases there can be a lot of comparatively small
packages, none of which exceeds this limit, so only the "Others"
category would be displayed, which is not nice.
Conversely, if there are a lot of packages, most of which only slightly
exceed this limit, then we get all of them in the graph, which is not
nice either.
Add a way for the developers to pass a different cut-off limit. As for
the dependency graph, which has BR2_GRAPH_DEPS_OPTS, add the environment
variable BR2_GRAPH_SIZE_OPTS to carry those extra options (in preparation
for more to come, later).
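One possible way to fold such an environment variable into the argument
parsing, sketched in Python (the option name and default are illustrative):

    import argparse
    import os
    import shlex
    import sys

    parser = argparse.ArgumentParser()
    parser.add_argument("--size-limit", type=float, default=0.01,
                        help="packages below this fraction of the total "
                             "size are grouped into 'Others'")
    # Options from the environment come first, so that options given on
    # the command line take precedence.
    args = parser.parse_args(
        shlex.split(os.environ.get("BR2_GRAPH_SIZE_OPTS", "")) + sys.argv[1:])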
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
[Arnout:
- remove empty base class definition from Config;
- use parser.error instead of ValueError for invalid argument.]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Currently, we forcibly report sizes in multiples of kilobytes. In some
big configurations, the size of the system as a whole, as well as the
sizes of individual packages, may exceed megabytes, and when some
artistic assets get used, even the gigabyte may be exceeded.
These big sizes are not easy to read when expressed in kilobytes.
Additionally, some very small packages might have sizes below the
kilobyte (and when we can specify the cut-off grouping size, they may
get reported), and thus the size displayed for those would be 0 kB.
Add a helper function that can format a floating-point size into a
string with all the appropriate formatting:
- there are at least 3 meaningful digits visible, i.e. we display
"3.14" or "10.4" instead of just "3" or "10", but for big numbers we
don't care about too much precision either, so we report "100" or
"1000", not "100.42" or "1000.27";
- the proper SI prefix is appended, if needed.
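A minimal sketch of such a helper, assuming SI prefixes (the real helper
may differ in naming and rounding details):

    def hr_size(size, si=True):
        # Reduce 'size' (in bytes) by powers of 1000 (SI) or 1024 (IEC)
        # until it fits below the divider, remembering the prefix.
        divider = 1000.0 if si else 1024.0
        prefixes = ["", "k", "M", "G", "T"] if si else ["", "Ki", "Mi", "Gi", "Ti"]
        for prefix in prefixes:
            if abs(size) < divider:
                break
            size /= divider
        # Keep roughly 3 significant digits: 3.14, 10.4, 100, 1000.
        if abs(size) < 10:
            return "%.2f %sB" % (size, prefix)
        elif abs(size) < 100:
            return "%.1f %sB" % (size, prefix)
        return "%.0f %sB" % (size, prefix)

    print(hr_size(3141))         # 3.14 kB
    print(hr_size(1234567890))   # 1.23 GB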
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Currently, the "unknown" category may be reported anywhere, so it does
not really stand out when there are a lot of packages in the graph.
Move it towards the end, but right before the "other" category, so that
it is a bit more visible. Like for Others, don't report it if its size
is zero.
Also, make it title case (i.e. "Unknown" instead of "unknown").
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
It is nicer overall to have a main() function, like all our other
scripts tend to have too.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
There are three E501 warnings returned by flake8 when run locally,
because we enforce a local 80-char limit, but which are not reported by
the gitlab-ci jobs because only a 132-char limit is enforced there.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Reviewed-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Similar to toolchains and jpeg, we now offer a way for br2-external
trees to provide their openssl implementation, which gets included in
the openssl choice.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Vadim Kochan <vadim4j@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Similar to toolchains, we now offer a way for br2-external trees to
provide their libjpeg implementation, which gets included in the jpeg
choice.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Vadim Kochan <vadim4j@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Since we have a choice for the pre-configured pre-built toolchains,
there is no possibility for a br2-external tree to provide its own. The
there is no possbility for a br2-external to provide its own. The
only solution so far for defconfigs in br2-external trees is to use
BR2_TOOLCHAIN_EXTERNAL_CUSTOM and define all the bits by itself...
This is not so convenient, so offer a way for br2-external trees to
provide such pre-configured toolchains.
To allow for this, we now scan each br2-external tree and look for a
specific file, provides.toolchains.in. We generate a kconfig file that
sources each such file, and that generated file is sourced from within
the toolchain choice, thus making the toolchains from a br2-external
tree possible and available in the same location as the ones known to
Buildroot:
    Toolchain --->
        Toolchain type (External toolchain) --->
        Toolchain --->
            (X) Arm ARM 2019.03
            ( ) Linaro ARM 2018.05
            ( ) Custom toolchain
            *** Toolchains from my-br2-ext-tree: ***
            ( ) My custom ARM toolchain
            *** Toolchains from another-br2-ext-tree: ***
            ( ) Another custom ARM toolchain
            ( ) A third custom ARM toolchain
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Vadim Kochan <vadim4j@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Currently, the kconfig part contains two things: the kconfig option
with the paths to br2-external trees, and the kconfig menus for the
br2-external trees.
When we want to include more kconfig files from the br2-external tree
(e.g. to get definitions for pre-built toolchains), we will need to
have the paths defined earlier, so they can be used from the br2-external
tree to include files earlier than the existing menus.
Split the generated kconfig file in two: one to define the paths, which
gets included early in our main Config.in, and one to actually define
the existing menus, which still gets included at the same place it
currently is.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Vadim Kochan <vadim4j@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
We currently redirect the output of each helper function. This was nice
as long as we were generating a single .mk and a single .in fragment.
But we are soon to need more .in fragments.
So, do the redirection inside the .in helpers.
We do not (currently) need to generate more than one .mk fragment, but
for consistency, do the redirection in the .mk helper too.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Vadim Kochan <vadim4j@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
When we introduced support for multiple br2-external trees, we
introduced two files, one on the Makefile side, needed very early,
and one on the kconfig side, needed later in the configuration
process. We naturally introduced a two-step generation, as it looked
like the simplest and most obvious way.
But now, we are on the verge of generating more files on the kconfig
side, and it does not make sense to add even more steps to generate
them.
And even better yet, we can generate both the Makefile-side and
kconfig-side files at the same time, in fact.
Make it so.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Vadim Kochan <vadim4j@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
We do not usually provide help for our internal scripts. Besides, such
help has a tendency to bitrot pretty quickly anyway.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Vadim Kochan <vadim4j@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Commit b14b02698 (core/br2-external: restore compatibility with old
distros) switched to using 'eval' to emulate associative arrays, for
those distros too old to have bash-4+.
In so doing, it forgot to declare the new local variables in the
respective helper functions.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Vadim Kochan <vadim4j@gmail.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The major bottleneck in pkg-stats is the time spent waiting for
answers from remote servers. Two functions involve such communication
with remote servers:
- 'check_package_urls', which checks that each package upstream website
is up; it is efficient thanks to the use of process pools, courtesy of
Matt Weber.
- 'check_package_latest_version', which fetches the latest package
version from release-monitoring.org; it uses an HTTP pool but runs
sequentially.
This patch extends the use of process-pools to 'check_latest_version'.
Due to some limitations of multiprocess callbacks, this patch loses
the overall progress of packages in favour of just the current package
name.
Runtimes for this function are ~3m vs ~25m for the linear version.
Tested on an i7 7500U (2/4 cores/threads @3.5GHz) with 15ms ping.
Note: there has already been work on parallelizing this function using
threads, but it failed on some configurations [1]. This implementation
relies on a dedicated module already in use in this script, so it is
unlikely to fail in the same way.
[1] http://lists.busybox.net/pipermail/buildroot/2018-March/215368.html
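A rough sketch of the process-pool approach described above (the worker
body is elided; the real code wires results back through callbacks):

    import multiprocessing

    def check_package_latest_version_worker(name):
        # Query release-monitoring.org for 'name' and return the latest
        # version found (the actual request is elided in this sketch).
        return None

    def check_package_latest_version(packages):
        pool = multiprocessing.Pool(processes=64)
        results = pool.map(check_package_latest_version_worker,
                           [pkg.name for pkg in packages])
        pool.close()
        pool.join()
        for pkg, latest in zip(packages, results):
            pkg.latest_version = latest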
Signed-off-by: Victor Huesca <victor.huesca@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Fixes:
- blank space before ':'
- unused 'o' variable left from a previous patch
- bad continuation alignment
Signed-off-by: Victor Huesca <victor.huesca@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The pkg-stats script calls `make` three times to get a bunch of
variables. These variables can be obtained with only one make
invocation. This patch replaces the three calls by just one and adjusts
the parsing logic accordingly.
Note: another option suggested by Arnout would be to run `make
show-info`, which produces JSON with the necessary variables. This
would avoid the duplicated effort done in pkg-stats and pkg-utils and
allow adding other information to pkg-stats, such as dependencies,
reverse dependencies, or whether the package is virtual.
In order to use this method, the following changes are required in
pkg-generic's show-info:
- include license_files;
- have an option to run it on *all* packages, not just the selected
ones.
This patch takes the simplest approach of only factoring out the make
calls, as it requires fewer changes.
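For illustration, a single-invocation helper could look roughly like this
(the variable names are only examples of what the script asks for):

    import subprocess

    def get_config_vars(*variables):
        # One "make printvars" invocation for all the variables we need,
        # instead of one make invocation per group of variables.
        cmd = ["make", "--no-print-directory", "-s",
               "printvars", "VARS=%s" % " ".join(variables)]
        output = subprocess.check_output(cmd).decode().splitlines()
        return dict(line.split("=", 1) for line in output if "=" in line)

    variables = get_config_vars("BR2_VERSION", "BR2_DEFCONFIG")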
Signed-off-by: Victor Huesca <victor.huesca@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Since it's used only for the HTML output, and all other functions used
for HTML output are prefixed by dump_html, let's do so for
dump_gen_info() as well by renaming it to dump_html_gen_info().
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The 'dump_html' and 'dump_json' functions both include commit info as
well as the current date. It makes more sense to retrieve this
information once. This patch simply does this factorization.
Signed-off-by: Victor Huesca <victor.huesca@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Pkg-stats is a great script that gets a lot of interesting info from
Buildroot packages. Unfortunately it is currently designed to output a
static HTML page only. While this is great for inclusion on the
Buildroot website, the HTML is not designed to be easily parsed, and
thus it is difficult to reuse in other scripts.
This patch provides a new option to output a JSON file in addition to
the HTML one.
The old 'output' option has been renamed to 'html' to distinguish it
from the new 'json' option.
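The option handling then looks roughly like this sketch (dump_html stands
in for the existing HTML rendering; the JSON layout is up to the script):

    import argparse
    import json

    def dump_html(stats, path):
        pass   # placeholder for the existing HTML rendering code

    parser = argparse.ArgumentParser()
    parser.add_argument("--html", help="HTML output file")
    parser.add_argument("--json", help="JSON output file")
    args = parser.parse_args()

    stats = {"packages": {}}   # filled in by the rest of the script
    if args.html:
        dump_html(stats, args.html)
    if args.json:
        with open(args.json, "w") as f:
            json.dump(stats, f, indent=2, sort_keys=True)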
Signed-off-by: Victor Huesca <victor.huesca@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Move the mutual exclusion of the '-n' and '-p' options to be part of the
parser instead of being checked in main.
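With argparse this amounts to a mutually exclusive group, roughly (help
texts are paraphrased):

    import argparse

    parser = argparse.ArgumentParser()
    group = parser.add_mutually_exclusive_group()
    group.add_argument("-n", dest="npackages", type=int,
                       help="number of packages to analyse")
    group.add_argument("-p", dest="packages",
                       help="list of packages to analyse")
    args = parser.parse_args()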
Signed-off-by: Victor Huesca <victor.huesca@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
/lib/grub is already ignored, so add /usr/lib/grub to support
BR2_ROOTFS_MERGED_USR.
Signed-off-by: Alex Xu <alex_y_xu@yahoo.ca>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Currently, we extract the dependency graph from the aptly named but
ad-hoc show-dependency-graph rule.
We now have a better solution to report package information, with
show-info.
Since show-dependency-graph has never been part of a release so far, and
show-info does provide the same information (and more), switch to using
show-info.
Thanks to Adam for suggesting the coding style to have a readable code
that is not ugly but still pleases flake8. Thanks to Arnout for
suggesting the use of dict.get() to further simplify the code.
Note: we do not use the reverse_dependencies field because it only
contains those packages that have a kconfig option, so we'd miss most
host packages.
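A sketch of consuming show-info from the script, using dict.get() as
suggested (the exact JSON field names are an assumption here):

    import json
    import subprocess

    output = subprocess.check_output(
        ["make", "-s", "--no-print-directory", "show-info"])
    info = json.loads(output)

    # Build the dependency dict; dict.get() keeps the code short when a
    # package entry has no "dependencies" field at all.
    deps = {pkg: data.get("dependencies", [])
            for pkg, data in info.items()}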
Reported-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Cc: Adam Duskett <aduskett@gmail.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Previously, the flake8 script didn't help us to detect when Python
scripts were incorrectly wrapped. Now, however, it does report such
errors.
Fix one such error now.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
[Arnout: give commit message a more positive tone]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Now that we can get the whole dependency tree from make, use it to
speed up things considerably.
So far, we had three functions to get the dependencies information:
get_depends(), get_rdepends(), and, somehow unrelated, get_version().
Because of the way %-show-{,r}depends works, getting the dependency tree
was expensive; the three functions all took a set of packages for which
to get the dependencies, in an attempt to limit the time it took to get
that tree, but we still had to call these functions iteratively, until
they returned no new dependency. This was pretty costly.
Now, getting the tree is much, much less costly, and we can get the
whole tree as cheaply as we previously got only the first-level
dependencies.
Furthermore, we can now also get the version information at the same
time, and that also brings in whether the package is virtual or not,
target or host.
So, we drop all three helper functions, and replace them with a single
one that returns all that information in one go: full dependency trees
(direct and reverse), per-package type, and per-package version.
Note: since commit 2d29fd96a (pkg-virtual: remove VERSION/SOURCE),
virtual packages are no longer reported as having a 'virtual' version,
so they have since been displayed as regular packages in the graphs.
Although no one complained, this patch incidentally restores the initial
behaviour, and virtual packages are now correctly displayed as such
again.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
When we simplify the dependency graph, we try to remove so-called
mandatory dependencies from each package, and for each mandatory
dependency that was thus removed, reattach it to the root package of the
graph.
This was done so that mandatory dependencies (which are dependencies of
all packages, or at least of a lot of packages) do not clutter the
dependency graph, but are still shown in the graph, as dependencies of
the root package.
However, these mandatory dependencies are only _direct_ dependencies.
As such, it does not make sense to reattach a mandatory dependency when
doing a reverse graph. Worse, it can actually be incorrect.
For example, 'skeleton' is a mandatory dependency, and as such is
removed from all packages. But when doing a reverse graph, skeleton is
now in the dependency chain of, e.g. skeleton-init-none; it should then
not be removed.
In short: the notion of mandatory dependencies does not make sense in
the case of a reverse graph.
Consequently, skip over the mandatory dependency removal when doing a
reverse graph.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
When host-gzip is needed, it is a mandatory dependency of all packages.
As such, drawing the dependency lines toward host-gzip would uselessly
clutter the graph.
So, like for the skeleton, host-skeleton, and host-tar, we cut the
dependency chains toward host-gzip.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
When host-tar is needed, it is a mandatory dependency of all packages.
As such, drawing the dependency lines toward host-tar would uselessly
clutter the graph.
So, like for the skeleton and host-skeleton, we cut the dependency chains
toward host-tar.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
host-skeleton is a dependency of almost all packages, except a very few.
As such, it clutters the dependency graph uselessly.
Do with it as we do for the skeleton: cut the dependency chains.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Sometimes, multiple dependency graphs for a set of packages (mostly
the application-level packages of the project) are included in reports
(e.g. delivery notes). Repeating the mandatory dependencies on all
those graphs is useless and clutters the important dependencies.
When we had only two such mandatory dependencies (toolchain, skeleton),
it was manageable to list them as manual exclusions:
-x toolchain -x skeleton
But we now have quite a few such dependencies, and it becomes a bit more
cumbersome to manage, not counting the ones we may add in the future.
Add an option to exclude all those mandatory dependencies, to generate
neat graphs.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The current graph-depends implementation filters out a number of
"mandatory" dependencies that all packages have: dependency on
"toolchain" and dependency on "skeleton".
Despite this filtering, in full graph dependencies, "toolchain" and
"skeleton" are still shown, because they are target packages, and
therefore appear in the result of "make show-targets". Thanks to this,
they will be visible as dependencies of the "ALL" node, which is the
root of the dependency tree.
However, as we are going to introduce host-skeleton as a "mandatory
dependency" to be filtered out, this is no longer going to work.
This commit adjusts the remove_extra_deps() function to ensure that
when a mandatory dependency is removed, this dependency exists between
the root of the dependency tree and the mandatory dependency.
This issue was noticed by Yann E. Morin, and this commit provides a
different implementation than what Yann proposed in
https://patchwork.ozlabs.org/patch/910453/.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
[yann.morin.1998@free.fr:
- list mandatory deps before removing them
- fix flake8 warnings
]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Fixes the following flake8 warnings:
support/scripts/pkg-stats:34:2: W605 invalid escape sequence '\$'
support/scripts/pkg-stats:34:4: W605 invalid escape sequence '\('
support/scripts/pkg-stats:34:11: W605 invalid escape sequence '\$'
support/scripts/pkg-stats:34:13: W605 invalid escape sequence '\('
support/scripts/pkg-stats:34:32: W605 invalid escape sequence '\)'
support/scripts/pkg-stats:34:34: W605 invalid escape sequence '\)'
support/scripts/pkg-stats:35:2: W605 invalid escape sequence '\s'
support/scripts/pkg-stats:35:14: W605 invalid escape sequence '\S'
support/scripts/pkg-stats:35:17: W605 invalid escape sequence '\s'
support/scripts/pkg-stats:42:1: E302 expected 2 blank lines, found 1
support/scripts/pkg-stats:587:133: E501 line too long (157 > 132 characters)
Note that the "invalid escape sequence" errors work because Python
leaves the \ in place if it doesn't recognise the escape sequence. But
it's better practice to use a raw string for regular expressions.
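For example (an illustrative pattern, not the actual one from pkg-stats):

    import re

    # Works, but only because Python keeps unknown escapes like "\$" as-is,
    # and flake8 flags every one of them with W605.
    pattern = re.compile("\$\(HOST_DIR\)")

    # Preferred: a raw string makes the intent explicit and silences W605.
    pattern = re.compile(r"\$\(HOST_DIR\)")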
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Introduce support/scripts/check-merged-usr.sh, a script that checks if a
given path complies with the merged /usr requirements:
/
/bin -> usr/bin
/lib -> usr/lib
/sbin -> usr/sbin
/usr/bin/
/usr/lib/
/usr/sbin/
Use this script in skeleton-custom.mk instead of a bunch of variables
filled by $(shell ...) macros. The same script will be used to check
rootfs overlays, in a forthcoming change.
Signed-off-by: Carlos Santos <casantos@datacom.ind.br>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
This commit adds fetching the latest upstream version of each package
from release-monitoring.org.
The fetching process first tries to use the package mappings of the
"Buildroot" distribution [1]. This mapping mechanism allows telling
release-monitoring.org what the name of a package is in a given
distribution/build system. For example, the package xutil_util-macros
in Buildroot is named xorg-util-macros on release-monitoring.org. This
mapping can be seen in the section "Mappings" of
https://release-monitoring.org/project/15037/.
If there is no mapping, then it does a regular search, and within the
search results, looks for a package whose name matches the Buildroot
name.
Even though fetching from release-monitoring.org is a bit slow, using
multiprocessing.Pool has proven to not be reliable, with some requests
ending up with an exception. So we keep a serialized approach, but
with a single HTTPSConnectionPool() for all queries. Long term, we
hope to be able to use a database dump of release-monitoring.org
instead.
From an output point of view, the latest version column:
- Is green when the version in Buildroot matches the latest upstream
version
- Is orange when the latest upstream version is unknown because the
package was not found on release-monitoring.org
- Is red when the version in Buildroot doesn't match the latest
upstream version. Note that we are not doing anything smart here:
we are just testing if the strings are equal or not.
- The cell contains the link to the project on release-monitoring.org
if found.
- The cell indicates if the match was done using a distro mapping, or
through a regular search.
[1] https://release-monitoring.org/distro/Buildroot/
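A rough sketch of the two lookups with a single HTTPSConnectionPool (the
API endpoints and JSON fields shown are assumptions based on the
description above; check the release-monitoring.org API documentation for
the exact interface):

    import json
    import urllib3

    pool = urllib3.HTTPSConnectionPool("release-monitoring.org", maxsize=1)

    def check_latest_version(name):
        # 1. Try the "Buildroot" distribution mapping first.
        r = pool.request("GET", "/api/project/Buildroot/%s" % name)
        if r.status == 200:
            return json.loads(r.data)["version"], "mapping"
        # 2. Fall back to a plain search on the package name.
        r = pool.request("GET", "/api/projects/?pattern=%s" % name)
        for project in json.loads(r.data)["projects"]:
            if project["name"] == name:
                return project.get("version"), "search"
        return None, None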
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Tested-by: Matthew Weber <matthew.weber@rockwellcollins.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Replace all YAML anchors with the new "extends" keyword because it is
more readable and more flexible (it works across configuration files
combined with the new "include" keyword).
Readability matters more in .gitlab-ci.yml.in.
In the part of .gitlab-ci.yml that is auto-generated by 'make
.gitlab-ci.yml' keep the keyword in the same line of the job name.
So instead of this:
zynqmp_zcu106_defconfig:
extends: .defconfig
tests.boot.test_atf.TestATFAllwinner:
extends: .runtime_test
Use this:
zynqmp_zcu106_defconfig: { extends: .defconfig }
tests.boot.test_atf.TestATFAllwinner: { extends: .runtime_test }
Do this to keep .gitlab-ci.yml easier to post-process with a script.
Signed-off-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
setlocalversion will use 'hg id' to determine whether or not the current
revision is tagged. If there is no tag, the Mercurial revision is printed,
otherwise nothing is printed.
The problem is that the user may have custom configuration settings (in
their ~/.hgrc file or similar) that change the output of 'hg id' in a way
that the script does not expect. In such cases, the Mercurial revision may
not be printed, or may be printed incorrectly.
It is good practice to ignore the user environment when calling Mercurial
commands from a well-defined script, by setting the environment variable
HGRCPATH to the empty string. See also 'hg help environment'.
In the particular case of Nokia, a custom extension adds dynamic tags in the
repository, i.e. tags that are stored in a file external to the repository
and only visible when the extension is active. These tags should not
influence the behavior of setlocalversion as they are not official Buildroot
tags, i.e. even if a revision is tagged, the Mercurial revision should still
be printed.
Note that this still does not solve the problem where an organization adds
_real_ tags in their Buildroot repository. For example, there might be a
moving tag 'last-validated' or tags indicating in which product release that
Buildroot revision was used. In these cases, setlocalversion will still not
behave as expected, i.e. show the Mercurial revision.
Signed-off-by: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
When Buildroot is stored in a Mercurial repository on a branch other than
'default' ('master' in git terms), setlocalversion (used to populate
/etc/os-release) will incorrectly think that this is a tagged version and
will NOT print out the revision hash.
This is due to the fact that the output of 'hg id' is assumed to be
"<revision> <tags-if-any>"
but when on a branch it actually is:
"<revision> (<branch>) <tags-if-any>"
To let setlocalversion receive the output it expects, explicitly ask 'hg id'
to retrieve only the revision hash and any tags, omitting any branch
information.
Signed-off-by: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The color for 'extract' is very similar to the one for 'install-images'.
Both are cyan-like.
Replace the former by a pale blue to make all colors sufficiently distinct.
Signed-off-by: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Tested-by: Matthew Weber <matthew.weber@rockwellcollins.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Total build time also involves download. Getting visibility on the impact
of that step can be important for users/admins, e.g. to evaluate different
methods of BR2_PRIMARY_SITE.
Colors used are some kind of purple (primary scheme) and light orange
(alternate scheme).
Signed-off-by: Mathias De Maré <mathias.de_mare@nokia.com>
[ThomasDS: rebase and update colors to avoid confusion]
Signed-off-by: Thomas De Schampheleire <thomas.de_schampheleire@nokia.com>
Tested-by: Matthew Weber <matthew.weber@rockwellcollins.com>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
As suggested by Arnout Vandecappelle, let's document the
elf_needs_rpath() and check_elf_has_rpath() functions, before we make
them a bit more complicated with per-package directory support.
Suggested-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Reviewed-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
GitLab imposes severe limitations on triggers.
Using a variable in a regexp is not allowed:
| only:
| - /-$CI_JOB_NAME$/
| - /-\$CI_JOB_NAME$/
| - /-%CI_JOB_NAME%$/
Using the key 'variables' always leads to an AND with 'refs', so:
| only:
| refs:
| - branches
| - tags
| variables:
| - $CI_JOB_NAME == $CI_COMMIT_REF_NAME
would make the push of a tag no longer trigger all jobs.
Inheritance is used only for the second level of keys, so:
|.runtime_test: &runtime_test
| only:
| - tags
|tests.package.test_python_txaio.TestPythonPy2Txaio:
| <<: *runtime_test
| only:
| - /-TestPythonPy2Txaio$/
would override the entire key 'only', making the push of a tag no
longer trigger all jobs.
So, in order to have a trigger per job and still allow the push of a tag
to trigger all jobs (all this in a follow up patch), the regexp for each
job must be hardcoded in the .gitlab-ci.yml and also the inherited
values for key 'only' must be repeated for every job.
This is not a big issue: .gitlab-ci.yml is already automatically
generated from a template, and there will be no need to hand-edit it
when jobs are added or removed.
Since the logic to generate the yaml file from the template will become
more complex, move the commands from the main Makefile to a script.
Using Python or another advanced scripting language for that script would
be the most versatile solution, but that would bring another dependency
on the host machine: pyyaml, if Python is used. Every developer that
needs to run 'make .gitlab-ci.yml', as well as the docker image used in
the GitLab pipelines, would then need to have pyyaml pre-installed.
Instead of adding the mentioned dependency, keep using a bash script.
While moving the commands to the script:
- mimic the behavior of the previous make target and fail on any
command that fails, by using 'set -e';
- break the original lines into one command per line, making the diff of
any patch applied to this file look nicer;
- keep the script as simple as possible, without functions, just a
script that executes from the top to bottom;
- do not perform validations on the input parameters, any command that
fails already makes the script fail;
- do not add a usage message, the script is not intended to be called
directly.
This patch does not change functionality.
Signed-off-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
[Thomas: make the script output on stdout rather than take the output
file name as second argument.]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This array will be re-used in another function in a follow-up commit,
so it makes sense to factor it out.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The remove_extra_deps() function removes dependencies that we are not
interested in seeing in the dependency graph. It does this for all
packages, except the 'all' package, which on full dependency graphs is
the root of the tree.
However, this doesn't take into account package-specific dependency
graphs (i.e. make <pkg>-graph-depends) where the root is not 'all', but
'<pkg>'. Due to this, dependencies on "mandatory deps" were not
visible at all, i.e. the toolchain package (and its dependencies) and
the skeleton package (and its dependencies) were not displayed in
package-specific dependency graphs.
To fix this, we use the existing rootpkg variable instead of
hardcoding 'all'.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Currently, we avoid drawing the dependencies that we call 'target
exceptions', because they were initially returned by 'show-targets',
when they in fact were not really packages and thus should not be on
the graph.
However, those two exceptions have no longer been reported in the output
of show-targets since we merged the very old initial top-level parallel
build work back in 2014, with commit a24877586a (Makefile: add support
for top-level parallel make), where they were converted into purely
internal rules.
Four years have passed; we can now drop those exceptions from the
graph-depends script.
This concludes the cleanup initiated three years ago with commit
0b32791f00 (graph-depends: remove absent targets from
TARGET_EXCEPTIONS).
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Add an option to install grub2 support tools to the target.
In the context of Buildroot, some useful target tools provided are
grub2-editenv and grub2-reboot, which provide means to manage the grub2
environment, boot order, and others.
Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Inside the check_elf_has_rpath(), we check if the host binary has a
correct RPATH, which should be either an absolute path to
$(HOST_DIR)/lib, or a relative path using $ORIGIN. Those two
conditions are checked in a single statements, but as we are going to
add a third condition, let's split this up a bit:
- If we have a RPATH to $(HOST_DIR)/lib -> we're good, return 0
- If we have a RPATH to $ORIGIN/../lib -> we're good, return 0
- Otherwise, we will exit the loop, and return 1
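Expressed in Python pseudo-code for clarity (the real check-host-rpath is
a shell script, and the exact $ORIGIN spelling it accepts may differ):

    def check_elf_has_rpath(rpath_entries, host_dir):
        for rpath in rpath_entries:
            if rpath == host_dir + "/lib":
                return True    # absolute RPATH into $(HOST_DIR)/lib
            if rpath == "$ORIGIN/../lib":
                return True    # relative RPATH using $ORIGIN
        return False           # neither condition matched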
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Graphviz' dot utility does not like nodes whose names do not start with
an alphabetic character (^[[:alpha:]]); i.e. 18xx-ti-utils would cause
grievance:
Warning: syntax ambiguity - badly delimited number '18x' in line 4 [...]/graph-depends.dot splits into two tokens
Warning: syntax ambiguity - badly delimited number '18x' in line 5 [...]/graph-depends.dot splits into two tokens
Warning: syntax ambiguity - badly delimited number '18x' in line 6 [...]/graph-depends.dot splits into two tokens
Warning: syntax ambiguity - badly delimited number '18x' in line 7 [...]/graph-depends.dot splits into two tokens
Prefix nodes with an underscore to fix that.
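In the python script this boils down to a tiny helper along these lines
(names are illustrative):

    def pkg_node_name(pkg):
        # dot is unhappy with node names starting with a digit, such as
        # 18xx-ti-utils, so prefix every node name with an underscore.
        return "_" + pkg

    print("%s -> %s" % (pkg_node_name("18xx-ti-utils"),
                        pkg_node_name("host-skeleton")))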
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Suppose we use the Makefile wrapper and build some project out of the
buildroot tree (O=...). A command like "make
busybox-all-external-deps" will output the string "umask 022 && make
..." to stdout before the useful information. It pollutes stdout. At
the same time, if we use the same command in the buildroot source tree,
then we don't get the additional output. This patch makes the wrapper
silent by default. People who prefer to see more verbose output can
use V=1.
Signed-off-by: Serj Kalichev <serj.kalichev@gmail.com>
Reviewed-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Currently, the timestamps that we keep in build-time.log use a
second-level precision. However, as we are going to introduce a new
type of graph to draw the time line of a build, this precision is
going to be insufficient, as a number of steps are so short that they
are not even one second long, and generally the rounding to the second
gives a not so great looking graph.
Therefore, we add the nanoseconds to the timestamps using the %N date
specifier. A millisecond precision would have been sufficient, but %N
is all that date(1) provides at the sub-second level.
Since this is changing the format of the build-time.log file, this
commit adjusts the support/scripts/graph-build-time script
accordingly, to account for the floating point numbers that we have as
timestamps.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Adds a pool of worker threads to accelerate connection testing.
~7.5MB and 2% CPU per thread on an Intel i5-3230M CPU @ 2.60GHz.
Runtime is ~3min in parallel vs ~15min.
CC: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Matthew Weber <matthew.weber@rockwellcollins.com>
Reviewed-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>