Currently, we expect and only use hash files that lie within the package
directory, alongside the .mk file. Those hash files are thus bundled
with Buildroot.
This implies that only what's known to Buildroot can ever get into those
hash files. For packages where the version is fixed (or a static
choice), we can carry hashes for those known versions.
However, we do have a few packages for which the version is a free-form
entry, where the user can provide a custom location and/or version, like
a custom VCS tree and revision, or a custom tarball URL. This means that
Buildroot has no way to carry hashes for such custom versions, and thus
no way to check the integrity of what was downloaded against what was
expected. For a sha1 in a git tree, this is a minor issue, because the
sha1 by itself is already a hash of the expected content. But for custom
tarball URLs, or for a tag in a VCS, there is indeed no integrity check.
Buildroot can't provide such hashes, but interested users may want to
provide those, and currently there is no (easy) way to do so.
So, we need our download helpers to be able to accept more than one
hash file in which to look up hashes.
Extend the dl-wrapper and the check-hash helpers accordingly, and update
the legal-info infrastructure to match.
Note that, to be able to pass more than one hash file, we also need to
re-order the arguments passed to support/download/check-hash, which also
implies some shuffling in the three places it is called:
- 2 in dl-wrapper
- 1 in the legal-info infra
That in turn also requires that the legal-license-file macro args get
re-ordered to have the hash file last; we take the opportunity to also
move the HOST/TARGET arg to be first, like in the other legal-info
macros.
Reported-by: "Martin Zeiser (mzeiser)" <mzeiser@cisco.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
In 6fa3a239 the gen-missing-cpe support script was removed together with
"make missing-cpe".
Remove the leftover path variable and drop it from "make clean".
Signed-off-by: Daniel Lang <dalang@gmx.at>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
NodeJS 20 requires gcc >= 10.1. Unfortunately, except for
BR2_HOST_GCC_AT_LEAST_4_9, Buildroot only handles the host gcc version
with the granularity of the major release, so we have to round up
to gcc >= 11 for NodeJS 20.
Signed-off-by: Adam Duskett <adam.duskett@amarulasolutions.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
commit 21d52e52d8 (package/pkg-utils.mk: break hardlinks in global
{TARGET, HOST}_DIR on per-package build) was recently reverted, so we
are back to a situation where it is possible for packages and post-build
scripts to modify files in-place, and thus impact files in any arbitrary
per-package directory, which may break things on rebuild for example.
21d52e52d8 was too big a hammer, but we can still apply its reasoning
to the aggregation of the final target and host directories.
This solves the case for post-build scripts at least. We leave the case
of inter-package modification aside, as it is a bigger issue that will
need more than just copying files around.
We use --hard-links, so that hard-links in the source (the per-package
directory) are kept as new hard-links (i.e. "copies" of hard-links) in
the destination.
This contributes to limiting the size of target/.
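For illustration, the copy boils down to something like the following
recipe (a sketch only: 'aggregate-target' and 'LAST_PPD' are
hypothetical names, not the rule that was actually merged; TARGET_DIR
is the existing global target directory):

# --hard-links makes rsync re-create hard-links found in the per-package
# directory as hard-links in the aggregated target/, instead of turning
# them into independent copies.
aggregate-target:
	rsync -a --hard-links $(LAST_PPD)/target/ $(TARGET_DIR)/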
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Herve Codina <herve.codina@bootlin.com>
Cc: Peter Korsgaard <peter@korsgaard.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Reviewed-by: Herve Codina <herve.codina@bootlin.com>
The relocate-sdk.sh script does not work correctly when
BR2_PER_PACKAGE_DIRECTORIES is enabled. relocate-sdk.sh expects
everything to point at $HOST_DIR, but each package will be pointing at
its $(O)/per-package/*/host.
Use the same command that is used for scrubbing host paths during the
build, so that they are scrubbed to the final host directory location.
Signed-off-by: Brandon Maier <Brandon.Maier@collins.com>
Acked-by: Charles Hardin <ckhardin@gmail.com>
Signed-off-by: Arnout Vandecappelle <arnout@mind.be>
The reinstall, rebuild and reconfigure commands rely on the
left-to-right order of evaluation of the dependencies to make sure that
the stamp files are removed before attempting to rebuild. However, this
order of evaluation is not guaranteed. In particular, if top-level
parallel build is enabled, they are executed in parallel and the stamp
file may not have been removed yet when it is evaluated to decide if
rebuild has to be done.
Since make 4.4, it is possible to reproduce this issue by passing
`--shuffle=reverse` on the make command line.
To solve this, add a .WAIT directive between the clean and
install/build/configure dependencies. .WAIT was introduced in make 4.4
as well. It makes sure that the dependencies on the left are evaluated
before the dependencies on the right - exactly what we want here.
Earlier versions of make don't know about .WAIT, so we need to add a
.PHONY dependency to effectively ignore it.
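A minimal sketch of the construct ('foo' and the rule below are
illustrative, not the exact rules of the package infrastructure):

# make < 4.4 does not know the .WAIT special prerequisite; give it an
# empty phony rule so it degrades to a harmless no-op prerequisite there.
.PHONY: .WAIT
.WAIT:

# With make >= 4.4, .WAIT guarantees that foo-clean has completed before
# foo-build is even considered, also under top-level parallel build.
foo-rebuild: foo-clean .WAIT foo-build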
Note that this doesn't fix the problem for make versions earlier than
4.4. However, the issue isn't really that important: reinstall, rebuild
and reconfigure are development tools, they're not fully reliable to
begin with, and it's anyway less likely that someone uses `make -j` when
doing a reinstall/rebuild/reconfigure.
Signed-off-by: Arnout Vandecappelle <arnout@mind.be>
Reported-by: James Hilliard <james.hilliard1@gmail.com>
The intention of this script is to generate the XML that can be sent to
NVD to request a new CPE identifier.
As discussed on the mailing list [0], keeping up with version numbers of
all registered CPE IDs won't work.
In addition, the feed used to generate the XML files will be retired
[1]. In the future an API needs to be used for fetching the data in
connection with a local database.
All of this works against keeping this script and porting it to the new
API.
As a last blow, Matthew, the original author, concluded [2]:
> Makes sense to drop it. There never got to be enough momentum in the overall
> software community to make CVE or even the new identifier really accurate.
The intention is to ignore the version part of CPE IDs in the future,
and only look at the version range specified on a CVE. Therefore, a tool
to add new CPE ID versions isn't useful to us. It might still be useful
to have a tool to create the vendor and project parts of a CPE ID.
However, the current gen-missing-cpe tool doesn't support that, and the
API is anyway going to be retired. So there is no reason at all to keep
this around.
Remove gen-missing-cpe and the cpedb module. Remove the Makefile target
to call the script.
Since the cpedb module is removed, the CPEDB_URL definition must be
moved to the place where it is still used, in pkg-stats.
[0]: https://lists.buildroot.org/pipermail/buildroot/2023-August/672620.html
[1]: https://nvd.nist.gov/General/News/change-timeline
[2]: https://lists.buildroot.org/pipermail/buildroot/2023-August/672651.html
Signed-off-by: Daniel Lang <dalang@gmx.at>
Signed-off-by: Arnout Vandecappelle <arnout@mind.be>
Using "xargs" instead of "while read" loop allows for the patching of
files to be parallelized. This significantly reduces the amount of
time it takes to fix all the paths. On a larger RFS(~300MB) this
script was taking 5 minutes, it now only takes about 30s on a 12 core
machine.
Signed-off-by: Victor Dumas <dumasv.dev@gmail.com>
[Thomas: take into account the suggestion of Quentin Schulz to pass
PARALLEL_JOBS through the environment down to the fix-rpath script]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Teach check-package to detect python files by type and check them using
flake8.
Do not use subprocess to call 'python3 -m flake8', in order to avoid
spawning too many shells, which in turn would slow down the check for
multiple files (make check-package takes twice the time when using a
shell for each flake8 call, compared to importing the main application).
Expand the runtime test and the unit tests for check-package.
Remove check-flake8 from the makefile and also from the GitLab CI
because the exact same checks become part of check-package.
Suggested-by: Arnout Vandecappelle <arnout@mind.be>
Signed-off-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
[Arnout: add a comment to x-python to explain its purpose]
Signed-off-by: Arnout Vandecappelle <arnout@mind.be>
... just like check-flake8 already does.
When a new check_function is added to check-package, often there are
files in the tree that would generate warnings.
An example is the Sob check_function for patch files:
| $ ./utils/check-package --i Sob $(git ls-files) >/dev/null
| 369301 lines processed
| 46 warnings generated
Currently these warnings are listed when calling check-package directly,
and also in the output of pkg-stats, but the check_function does not run
on 'make check-package' (which is used to catch regressions in the
GitLab CI 'check-package' job) until all warnings in the tree are fixed.
This (theoretically) allows new .patch files to be added without a SoB,
without the GitLab CI catching it.
Since check-package now has an ignore file listing all warnings in the
tree (which will eventually be fixed), there is no need to filter the
files passed to check-package.
So test all files in the tree when 'make check-package' is called.
This brings the following advantages:
- any new check_function added to check-package takes effect immediately
for new files;
- adding new check_functions is less traumatic for the developer doing
so, since they no longer need to fix all warnings in the tree before
the new check_function takes effect;
- it prevents regressions, e.g. ANY new .patch file must have a SoB;
- as a side-effect, a single statistics line is printed as output of
'make check-package'.
But just enabling the check would generate many warnings when
'make check-package' is called, so update the ignore file by using:
$ ./utils/docker-run make .checkpackageignore
Note: in order to ensure reproducible results, one should run 'make
check-package' and 'make .checkpackageignore' inside the docker image;
otherwise, a variation in the shellcheck version installed on the host
can produce different results.
Signed-off-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
When a developer fixes an ignored warning from check-package, they need
to update .checkpackageignore.
By running './utils/docker-run make check-package', the developer
receives a warning about this.
Make that update easier by adding a helper target to the Makefile.
Add an option --failed-only to check-package that generates output in
the format:
<filename> <check_function> [<check_function> ...]
This is the very same format used by check-package ignore file.
Add the phony target .checkpackageignore, so one can update the ignore
file using:
$ ./utils/docker-run make .checkpackageignore
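For reference, a rough sketch of what the helper target may look like
(the way the list of files is computed is simplified here; the real
recipe differs in its details):

.PHONY: .checkpackageignore
.checkpackageignore:
	# Regenerate the ignore file from the current failures, one
	# "<filename> <check_function> ..." line per offending file.
	./utils/check-package --failed-only $$(git ls-files) > $@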
Signed-off-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Commit e6195c5304 (Makefile: fix use of many br2-external trees) fixed
a slowdown with many br2-external trees. In doing so, it changed the
type of the %_defconfig rule: the stem is no longer present in the
prerequisites, so it changes from a pattern rule to an implicit pattern
rule [0].
It is not unusual to name the build directory after the defconfig that
is being built, so we may end up with a build directory named
meh_defconfig. Before e6195c5304, the pattern rule would not match
[1], but now it does, which causes somewhat-cryptic build failures:
Makefile:1015: *** "Can't find /some/path/meh_defconfig". Stop.
The issue is that we have this set of rules and assignments (elided and
reordered for legibility):
all: world
world: target-post-image
target-post-image: staging-finalize
staging-finalize: $(STAGING_DIR_SYMLINK)
$(STAGING_DIR_SYMLINK): | $(BASE_DIR)
BASE_DIR := $(CANONICAL_O)
CANONICAL_O := $(shell mkdir -p $(O) >/dev/null 2>&1)$(realpath $(O))
So, there is a rule that (eventually) has a dependency on $(O), but we
have no rule that provides it explicitly, so the %_defconfig rule kicks
in, with the stem as "/some/path/meh". When the loop searches all the
".../configs/" directories for a file named ".../configs/%_defconfig",
it actually looks for a file named ".../configs//some/path/meh_defconfig"
and that indeed never matches anything.
The solution is to provide an actual rule for $(BASE_DIR), so that the
implicit rule does not kick in.
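That is, something along these lines (a sketch; the rule actually added
may differ in its details):

# With an explicit rule of its own, $(BASE_DIR) can no longer be matched
# by the %_defconfig pattern rule; mkdir -p is a no-op if the directory
# already exists (it is normally created via CANONICAL_O already).
$(BASE_DIR):
	@mkdir -p $(BASE_DIR)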
[0] Terminology and behaviour in make is hard, so the terms we used here
may be wrong or incorrectly used, and/or the explanations for the
behaviour may be wrong or incomplete... Still, the reasoning stands, and
the root cause is the removal of the stem in the RHS of the rule
(adding one back does fix the issue).
[1] not sure how the prerequisite was solved before e6195c5304,
though...
Fixes: e6195c5304
Reported-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Nevo Hed <nhed+buildroot@starry.com>
Cc: Peter Korsgaard <peter@korsgaard.com>
Tested-by: Sebastian Weyer <sebastian.weyer@smile.fr>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Historically we have been (more-or-less consistently, sometimes forgetting
some files) updating the end year of the copyright statements at the
beginning of a new year.
We're naturally not alone in that. Recently this was discussed in curl, and
it turns out that copyright years are not really required:
https://daniel.haxx.se/blog/2023/01/08/copyright-without-years/
So drop the years and simplify the copyright statements. While we're at it,
also ensure the same syntax (capital C, email address) is used everywhere.
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
The top level Makefile in buildroot has a recursive rule which causes
the appearance of a hang as the number of directories in BR2_EXTERNAL
increases. When the number of directories in BR2_EXTERNAL is small, the
recursion occurs, but make detects the recursion and determines the
target does not have to be remade. This allows make to progress.
This is the failing rule:
define percent_defconfig
# Override the BR2_DEFCONFIG from COMMON_CONFIG_ENV with the new defconfig
%_defconfig: $(BUILD_DIR)/buildroot-config/conf $(1)/configs/%_defconfig outputmakefile
	@$$(COMMON_CONFIG_ENV) BR2_DEFCONFIG=$(1)/configs/$$@ \
		$$< --defconfig=$(1)/configs/$$@ $$(CONFIG_CONFIG_IN)
endef
$(eval $(foreach d,$(call reverse,$(TOPDIR) $(BR2_EXTERNAL_DIRS)),$(call percent_defconfig,$(d))$(sep)))
The rule for %_defconfig is created for each directory in BR2_EXTERNAL.
When the rule is matched, the stem is 'defconfig_name'. The second
prerequisite is expanded to $(1)/configs/defconfig_name_defconfig. The
rule, and all of the other rules defined by this macro, are invoked
again, but the stem is now $(1)/configs/defconfig_name_defconfig. The
second prerequisite is now expanded to
$(1)/configs/$(1)/configs/defconfig_name_defconfig. This expansion
continues until make detects the infinite recursion.
With up to 5 br2-external trees, the time is very small, so that it is
not noticeable. But starting with 6 br2-external trees, the time is
insanely big (so much so that we did not even let it finish after it ran
for hours); see timings toward the end of the commit log.
We fix that by adding a single %_defconfig rule, which is now responsible
for finding the actual defconfig file that triggered the rule, by
iterating over the reversed list of br2-external trees and then the main
tree.
Of course, now, there is no way for make to warn that there is no such
defconfig, as it is no longer part of the prerequisites of the rule. So,
we delegate to the recipe the responsibility to check for that.
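A hedged sketch of that single rule (simplified; 'defconfig-path' is a
hypothetical helper name, and the recipe that was actually merged
differs in its details):

# Look up the defconfig across the reversed list of br2-external trees
# and then the main tree; the first existing candidate wins.
defconfig-path = $(firstword $(wildcard $(addsuffix /configs/$@,\
	$(call reverse,$(TOPDIR) $(BR2_EXTERNAL_DIRS)))))

%_defconfig: $(BUILD_DIR)/buildroot-config/conf outputmakefile
	$(if $(defconfig-path),,$(error Can't find $@))
	@$(COMMON_CONFIG_ENV) BR2_DEFCONFIG=$(defconfig-path) \
		$< --defconfig=$(defconfig-path) $(CONFIG_CONFIG_IN)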
Timing (seconds) of `make pc_x86_64_bios_defconfig` with 1..1000
external trees, with make 4.2.1 (* with make 4.3), on a Core i7-7700HQ:
#trees   Before     After
     1    0.312     0.319
     2    0.319     0.323
     3    0.325     0.327
     4    0.353     0.339
     5    0.993     0.349
     6    1.26*     0.347
     7    9.10*     0.362
     8   85.93*     0.360
     9      n/a     0.373
    10      n/a     0.374
    50      n/a     0.738
   100      n/a     1.228
   500      n/a     7.483
  1000      n/a    16.076
How to reproduce:
#!/usr/bin/env bash
N="${1:-1000}"
for i in $(seq 1 1000); do
    [ -d "br2-external/${i}/configs" ] && break
    mkdir -p br2-external/${i}/configs
    touch br2-external/${i}/{Config.in,external.mk}
    echo "name: BR_TEST_${i}" >br2-external/${i}/external.desc
    touch br2-external/${i}/configs/foo{,_${i}}_defconfig
done
time make \
    BR2_EXTERNAL="$(
        for i in $(seq 1 ${N}); do
            printf '%s\n' "$(pwd)/br2-external/${i}"
        done
    )" \
    foo_1_defconfig
Notes: the timings are very dependent on how much the CPU is otherwise
loaded, but having a multi-core CPU slightly loaded helps maintain a
high frequency on the sibling cores, and that can cut the above timings
in half! Best to try on an otherwise-idle system.
Fixes: #14996
Reported-by: David Lawson <david.lawson1@tx.rr.com>
Signed-off-by: Nevo Hed <nhed+buildroot@starry.com>
[yann.morin.1998@free.fr:
- split long foreach
- drastically extend the commit log
- provide reproducer script and redo timings
]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Compilation of Perl-related packages fails if `PERL_MM_OPT` is defined.
We previously issued an error in this case.
Instead, simply `unexport` the variable.
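In other words, a one-liner along these lines (sketch):

# Do not propagate the user's PERL_MM_OPT to sub-processes, as it breaks
# the build of Perl-related packages.
unexport PERL_MM_OPT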
Signed-off-by: Gleb Mazovetskiy <glex.spb@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
To generate the glibc locale data, we call into a recursive Makefile,
so as to generate locales in parallel. This is done as part of a
target-finalize hook.
However, that hook is registered after all packages have been parsed,
and as such, it may be registered after hooks defined in packages.
Furthermore, the expansion of target-finalize hooks is done in a recipe,
so it is not easy to understand whether this generates a "simple" rule
or not.
As a consequence, despite the use of $(MAKE), make may not notice that
the command is a recursive call, and will decide to close the jobserver
file-descriptors, yielding warnings like:
make[2]: warning: jobserver unavailable: using -j1. Add '+' to
parent make rule.
This causes the locale data to not be generated in parallel, which was
the whole point of using a sub-makefile in the first place...
So, do as suggested, and prepend the hook with a '+', so that it is
explicit to make that it should not close its jobserver fds.
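Illustrative sketch of the shape of the fix (the hook name and the
sub-make invocation are placeholders, not the actual locale-generation
hook):

# The leading '+' explicitly marks the command as a recursive make
# invocation, so make keeps the jobserver file-descriptors open for it.
define GEN_LOCALES_HOOK
	+$(MAKE) -C $(BUILD_DIR)/locale-gen all
endef
TARGET_FINALIZE_HOOKS += GEN_LOCALES_HOOK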
Fixes: 6fbdf51596 (Makefile: Parallelize glibc locale generation)
Signed-off-by: Yann E. MORIN <yann.morin@orange.com>
Cc: Gleb Mazovetskiy <glex.spb@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Use $(error) to simplify the code (drop "exit 1") and send the message
to stderr.
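The pattern, in the abstract (target and message below are placeholders,
not the actual code being changed):

# Before: the message went through the shell and needed an explicit
# failure:
#	@echo "Some error message"; exit 1
# After: $(error) writes to stderr and aborts make on its own:
some-sanity-check:
	$(error Some error message)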
Reported-by: David Laight <David.Laight@ACULAB.COM>
Reported-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Quentin Schulz <quentin.schulz@theobroma-systems.com>
When one is applying patches, it is pretty common to end up with .orig
and/or .rej files lying around. Unfortunately, our 'Config.*' match in
check-package ends up matching those files, causing false positives
when running "make check-package". To avoid this, this commit
excludes *.orig and *.rej files from the find logic used in the
check-package target.
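The shape of the change in the find invocation of the check-package
target (a simplified sketch; the real command matches more file patterns
and prunes more directories):

check-package:
	find . -type f \( -name 'Config.*' -o -name '*.mk' \) \
		! -name '*.orig' ! -name '*.rej' \
		-exec ./utils/check-package {} +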
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
printvars returns nothing when VARS is not passed or empty. This is done
on purpose, see commit fd5bd12379 ("Makefile: printvars: don't print
anything when VARS is not set").
An error message making explicit what is required from the user in order
to use printvars is, however, better than silently doing nothing.
This adds a check for a non-empty VARS variable.
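A sketch of the added guard (the exact wording of the message is
illustrative; the rest of the printvars recipe is unchanged):

printvars:
	$(if $(VARS),,$(error Please pass a non-empty VARS: make printvars VARS='<pattern>'))
	@# ... the existing printvars recipe continues here, unchanged ...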
Cc: Quentin Schulz <foss+buildroot@0leil.net>
Signed-off-by: Quentin Schulz <quentin.schulz@theobroma-systems.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Commit 5c54c3ef3d (Makefile: workaround make 4.3 issue for 'printvars'
and 'show-vars') did not fully fix the show-vars case, which still
segfaults.
Overall, show-vars generates a JSON blurb. That is supposed to be
machine-readable, so we do not care whether the variables are sorted,
and we get rid of the sorting to (slightly) simplify the code.
Then, we currently iterate twice on the list of variables: the first one
to filter-out the 'internal' variables, and the second one to filter
only the variables matching the pattern. We can do away with that by
iterating only once, applying both filters in a single pass.
Since we now have an 'and' condition, we can take advantage of it: when
none of the items in $(and) are empty, $(and) evaluates to the last
item, while it evaluates to empty if any of the items is empty. So we
can coalesce the $(if) and $(and) together: $(if $(and a,b),c) is
equivalent to $(and a,b,c); this gains us one level of parentheses
nesting.
Finally, the cause for the segfault is an overly-long call to $(info).
Reducing that is not easy: we want to call clean-json on the whole of
the JSON blurb, so we can't emit the individual variables one by one, or
the trailing comma would not be trimmed away.
So, we go crazy: we just output each word from clean-json with $(info).
We can do that, because mk-json-str transforms all spaces in a string
to an escaped UTF-8 sequence, so we will never have spaces in values;
the keys are the variables, so they won't have spaces either; spaces in
the rest of the JSON blurb are totally optional, so we don't care how
many there are. We know there are spaces, because we explicitly
introduce some (after "expanded" or "raw", for example), so we should
never hit a too-big word for $(info) to print.
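The output part then boils down to something of this shape (a sketch;
'show-vars-json' is a hypothetical stand-in for the macro that builds
the JSON text, while clean-json is the existing helper mentioned above):

show-vars:
	# Emit the JSON word by word: each $(info) call only ever gets one
	# word, so no single call receives an overly long string.
	$(foreach w,$(call clean-json,$(show-vars-json)),$(info $(w)))
	@: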
Thanks to Henri for the suggestion to push $(info) further inside the
macro.
Reported-by: Quentin Schulz <quentin.schulz@theobroma-systems.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Roosen Henri <Henri.Roosen@ginzinger.com>
Cc: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Tested-by: Quentin Schulz <quentin.schulz@theobroma-systems.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Even though the bug with make 4.3 has been reported and fixed, there has
not been a release of make containing the fix for a long time, see [1].
As the root cause seems to be that the 'filter' function cannot handle
large chunks of data, like .VARIABLES, we can work around the problem by
using a foreach over .VARIABLES, and then use the filter function on
each variable individually.
It might not be logical to program it that way, but at least the
functionality is now usable.
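In other words, the lookup changes shape roughly as follows (a sketch;
'matched-vars' is a hypothetical name, and VARS is the user-supplied
pattern as elsewhere in the Makefile):

# Fails with make 4.3: filter is handed the whole, huge .VARIABLES text:
#   matched-vars := $(filter $(VARS),$(.VARIABLES))
# Workaround: iterate and hand the variables to filter one at a time:
matched-vars := $(foreach v,$(.VARIABLES),$(filter $(VARS),$(v)))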
[1] https://savannah.gnu.org/bugs/?59093#comment10
Signed-off-by: Henri Roosen <henri.roosen@ginzinger.com>
[yann.morin.1998@free.fr: add comment to reference the bug]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>