In commit 04154a6517 (support/download/cargo-post-process: cargo
output for vendor config), we switched away from our hand-crafted
cargo.toml mangling, to use cargo itself to update that file.
In doing so, we enabled the shell pipefail option, so that we could
catch cargo failures, while redirecting its output through tee to the
cargo.toml.
However, pipefail is overzealous, and will hit us even for pipes whose
failure we do not want to treat as fatal, like the one that actually
checks whether an archive is already vendored or not:
if tar tf "${output}" | grep -q "^[^/]*/VENDOR" ; then
...
with pipefail, the above may always fail:
- if the tarball is already vendored, grep will exit on the first
  match because of -q (it only needs a single match to decide that its
  return code will be zero), so the pipe will get closed, and tar may
  get EPIPE before it has had a chance to finish listing the archive,
  and thus would terminate in error;
- if the tarball is not vendored, grep will exit in error.
It turns out that the tee was only added so that we could see the
messages emitted by cargo, and still fill the cargo.toml with the
output of cargo.
But that's a bit overkill: the cargo messages are going to stderr, and
the blurb to add to cargo.toml to stdout, so we just need to redirect
stdout.
Yes, we do not see what cargo added to cargo.toml, but that is not so
interesting.
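In practice, the change boils down to something like this (a simplified
sketch; the exact cargo invocation and destination path in the real
helper may differ, the destination shown here just follows cargo's own
suggestion):

    # before: tee duplicated stdout, so pipefail was needed to catch cargo failures
    cargo vendor VENDOR | tee .cargo/config.toml
    # after: cargo's messages go to stderr, only the config blurb is on stdout
    cargo vendor VENDOR > .cargo/config.toml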
Still, cargo ends its messages with a suggestion for the user to modify
cargo.toml, with:
To use vendored sources, add this to your .cargo/config.toml for this project:
But since we've already redirected that to cargo.toml, there is nothing
for the user to edit, so the above can get confusing. Emit a little
blurb that states that everything is under control.
And then we can drop pipefail.
Note: the go-post-process initially had pipefail too, but it was dropped
in bfd1a31d0e (support/download/go-post-process: drop -o pipefail) as
it was causing spurious breakage when extracting the archive before
vendoring, so it is only reasonable that we also remove it from the
cargo-post-process.
Reported-by: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Simon Richter <simon.richter@ptwdosimetry.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Fixes:
http://autobuild.buildroot.net/results/12a/12a63ae177fe3ed0c9a1ef2fa01870f334f36b0f/
Currently, when the post-process helper fails while downloading from
upstream, there is no fallback to the backup mirror.
When the post-process helper fails, we must consider that to be a
download failure, and bail out as if the download backend itself had
failed, but currently we do not.
Duplicate the logic we have for the download helper: if the post-process
helper fails, remove the downloaded stuff, and continue on to the next
URI, which will ultimately hit the backup mirror (if one has been
configured).
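A minimal sketch of the intended behaviour in the download wrapper
(function and variable names here are illustrative, not the actual ones
in support/download/dl-wrapper):

    for uri in ${uris}; do
        # remove the partial download and try the next URI on any failure
        backend_download "${uri}" "${tmp_dl}" || { rm -rf "${tmp_dl}"; continue; }
        if [ -n "${post_process}" ]; then
            "${post_process}" -o "${tmp_dl}" || { rm -rf "${tmp_dl}"; continue; }
        fi
        break
    done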
Reported-by: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Fixes:
http://autobuild.buildroot.net/results/820/820e98b1c126469b1f180f078d102ded43b9c40e/
scripts/Makefile.am of mosh-1.4.0 needs the perl diagnostics module on the host:
make[3]: Entering directory '/home/buildroot/autobuild/instance-2/output-1/build/mosh-1.4.0/scripts'
perl -Mdiagnostics -c ./mosh.pl
Can't locate diagnostics.pm in @INC (you may need to install the diagnostics module) (@INC contains: /home/buildroot/autobuild/instance-2/output-1/host/lib/perl /usr/local/lib64/perl5/5.36 /usr/local/share/perl5/5.36 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5).
BEGIN failed--compilation aborted.
So add a check for it in dependencies.sh similar to the other perl modules.
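Such a check can be as simple as (a sketch only; the actual
dependencies.sh goes through its common handling of required perl
modules):

    # fail early if the host perl cannot load the diagnostics module
    if ! perl -Mdiagnostics -e 1 >/dev/null 2>&1 ; then
        echo "Your perl installation is missing the 'diagnostics' module."
        exit 1
    fi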
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
There are three ways to run chronyd:
- start as root, and continue running as root;
- start as root, then setuid() to a non-root user via either a command
line option or a configuration directive;
- start as root, and setuid() to a build-time specified non-root user.
Currently, Buildroot uses the first situation, which does not follow
the security best practice of dropping elevated privileges at runtime
for daemons, when that is possible.
We switch to the third situation, where a compile-time default non-root
user is then used at runtime to drop privileges, with libcap used to
keep the capabilities required to call the appropriate syscalls to
adjust the system time (typically, CAP_SYS_TIME to call adjtimex() or
clock_settime() et al.).
This means that libcap is now a mandatory dependency.
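Concretely, this is a build-time setting of chrony's configure script,
along these lines (a sketch, assuming chrony's --with-user option; the
user name shown is illustrative):

    ./configure --with-user=chrony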
To be noted: users who previously had configured their systems to run
chronyd as a non-root user would have done so with either the
command-line option (`-u`) or the configuration directive (`user`).
Those take precedence over the compile-time default, so this should not
break their systems (presumably, they also run as the `chrony` user).
They would also have taken care to run chronyc as the appropriate user
to manipulate chronyd at runtime via the UNIX socket.
For those who were running chronyd as root, this does not change either:
the functionality is unchanged, and they were running chronyc as root,
which should still be capable of manipulating chronyd via its UNIX
socket.
Take that opportunity to bring chrony's Config.in up to the current
coding style: enclose sub-options in an if-endif block.
Signed-off-by: James Kent <james.kent@orchestrated-technology.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
The size of xipImage has grown by 84KB but there are still 278KB left
before running out of 2MB of flash memory.
Signed-off-by: Dario Binacchi <dario.binacchi@amarulasolutions.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Further fix on top of 9d1b223b91 (package/pkg-waf: add missing $).
Repeat after me: all variables in an inner-package macro must be
expanded, except for: parameters, pkgdir, and pkgname.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
We want to have $(@D) expanded at the time the recipe is run, so like
all other variables, we need to $$-expand it.
Fixes: 1b4d7f6e13
Fixes: http://autobuild.buildroot.org/results/b6f/b6fd3a866af182edc7831492aecc8323f377b826
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This commit adds two new test cases:
- TestNodeJSBasic which builds a target configuration with just
NodeJS enabled, and which runs a very simple NodeJS script on the
target.
- TestNodeJSModule, which builds a target configuration with NodeJS
enabled + the installation of one extra module, which means npm on
the host (from host-nodejs) is used, and which runs a very simple
NodeJS script on the target that uses this extra module.
Having both tests separately makes it possible to validate that both
nodejs-only and nodejs+host-nodejs configurations behave correctly, at
least in minimal scenarios.
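Both can be run with the usual runtime-test infrastructure, for
instance (invocation shown for illustration, assuming the tests live in
the usual tests.package namespace; dl and output_folder are arbitrary
directories):

    ./support/testing/run-tests -d dl -o output_folder tests.package.test_nodejs.TestNodeJSBasic
    ./support/testing/run-tests -d dl -o output_folder tests.package.test_nodejs.TestNodeJSModule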
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Tested-by: Romain Naour <romain.naour@smile.fr>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
By default, NodeJS searches for global modules in /usr/lib/node, but
NPM installs them in /usr/lib/node_modules/. Therefore, by default, if
one installs modules with BR2_PACKAGE_NODEJS_MODULES_ADDITIONAL, they
are not accessible to NodeJS, unless one passes a
NODE_PATH=/usr/lib/node_modules/ variable. Since this is not obvious,
and it's nicer when things work out of the box, we simply patch NodeJS
to look for modules in the right place.
See
https://stackoverflow.com/questions/15636367/nodejs-require-a-global-module-package
for some discussions on this topic.
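Without the patch, the workaround would have been something like
(illustrative; my-script.js is a hypothetical script name):

    NODE_PATH=/usr/lib/node_modules node /usr/bin/my-script.js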
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Reviewed-by: Romain Naour <romain.naour@smile.fr>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
We're not using the next branch, so let's rename the linux-next label
to linux.
Signed-off-by: Giulio Benetti <giulio.benetti@benettiengineering.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
During the last U-Boot version bump, it went unnoticed that the TPL
was no longer prepended to the SPL, preventing the board from booting.
So let's copy the TPL to the image folder, prepend it to
u-boot-spl-dtb.bin and place the result at offset 32KB, where the
RK3288 boot ROM expects to find it. Let's also place u-boot-dtb,
separate from the SPL, at offset 8M, where the SPL expects to find it.
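The resulting layout is roughly the following (a sketch with
illustrative file and image names; the actual assembly is handled by
the board's post-image/genimage configuration):

    cat u-boot-tpl.bin u-boot-spl-dtb.bin > u-boot-tpl-spl-dtb.bin
    # 32KB = 64 sectors of 512 bytes; 8M = 16384 sectors of 512 bytes
    dd if=u-boot-tpl-spl-dtb.bin of=sdcard.img seek=64 conv=notrunc
    dd if=u-boot-dtb.bin of=sdcard.img seek=16384 conv=notrunc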
Signed-off-by: Giulio Benetti <giulio.benetti@benettiengineering.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Elixir was a dependency of rabbitmq-server which got dropped in
89815bad0a. It is a host package with no other
users, hence it is no longer required. Additionally, newer versions require
Erlang 23+.
Signed-off-by: Frank Vanbever <frank.vanbever@mind.be>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
When generating legal info with a configuration enabling a package
which uses flit as its setup type, we get a warning about the
python-flit-core license:
WARNING: python-flit-core-3.8.0: cannot save license (HOST_PYTHON_FLIT_CORE_LICENSE_FILES not defined)
Add the missing variable pointing to the python-flit-core license file.
Signed-off-by: Alexis Lothoré <alexis.lothore@bootlin.com>
[Peter: add sha256sum to .hash file]
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Wpewebkit needs cmake >= 3.20 when building with the make backend
since wpewebkit 2.38.0.
CMake 3.20 is above the minimal version we check for in
support/dependencies/check-host-cmake.mk, so this breaks builds on
hosts with cmake >= 3.18 but < 3.20. So use the ninja backend instead.
6cd89696b5
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Webkitgtk needs cmake >= 3.20 when building with the make backend
since webkitgtk 2.38.0.
CMake 3.20 is above the minimal version we check for in
support/dependencies/check-host-cmake.mk, so this breaks builds on
hosts with cmake >= 3.18 but < 3.20. So use the ninja backend instead.
6cd89696b5
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Fixes:
http://autobuild.buildroot.net/results/ee1/ee15cadf8af10dee6c83b9726a034367e8ae81a7/
The bundled waf script is too old (2.0.7) for python >= 3.11, as it
uses the 'U' modifier to open() (universal newlines), which has been
deprecated since python 3.3 and was finally removed in 3.11.
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Fixes:
http://autobuild.buildroot.net/results/5ce/5ce5ebd20e0e509b31b51d2ec1aed56fdb8f45aa/
The bundled waf script is too old (2.0.12) for python >= 3.11, as it
uses the 'U' modifier to open() (universal newlines), which has been
deprecated since python 3.3 and was finally removed in 3.11.
Jack unfortunately uses a modified waf, so we cannot just set
JACK2_NEEDS_EXTERNAL_WAF; instead, backport an upstream patch fixing
the compatibility issue:
https://github.com/jackaudio/jack2/pull/884
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Fixes:
http://autobuild.buildroot.net/results/bbd/bbd90f11975b691f694412a6fc3446f37dd7c0aa/
The bundled waf script is too old (1.9.3) for python >= 3.11, as it
uses the 'U' modifier to open() (universal newlines), which has been
deprecated since python 3.3 and was finally removed in 3.11.
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
As that is now handled by the waf-package infrastructure.
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Reviewed-by: Romain Naour <romain.naour@smile.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Waf requires that the version of the waf script matches the version of
waflib, so drop any bundled waf/waflib if _NEEDS_EXTERNAL_WAF is used,
as otherwise waf errors out with messages like:
Waf script '2.0.24' and library '1.9.3' do not match
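In shell terms, the cleanup is essentially (a sketch; in the
waf-package infrastructure it is implemented as a post-patch hook
operating on the package build directory, called pkg_builddir below):

    # remove the bundled waf script and its waflib so only the external one is used
    rm -rf "${pkg_builddir}/waf" "${pkg_builddir}/waflib"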
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Reviewed-by: Romain Naour <romain.naour@smile.fr>
[Peter: Run as a post-patch hook as suggested by Yann]
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Explicitly set installed_tests to disabled.
Drop patch which is now upstream.
Signed-off-by: James Hilliard <james.hilliard1@gmail.com>
Signed-off-by: Arnout Vandecappelle <arnout@mind.be>
Instead of undefining the endianness CFLAGS, let's change the approach.
Let's disable CONFIG_PLATFORM_I386_PC, which is set to y by default and
forces the endianness to little-endian. This way we can set the CFLAGS
according to the architecture, with some default defines like:
-DCONFIG_IOCTL_CFG80211
-DRTW_USE_CFG80211_STA_EVENT
-Wno-error
Suggested-by: Arnout Vandecappelle <arnout@mind.be>
Signed-off-by: Giulio Benetti <giulio.benetti@benettiengineering.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This version allows building with Linux 6.1.
Fixes:
Still not reported
Signed-off-by: Giulio Benetti <giulio.benetti@benettiengineering.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Changelog (for details see [1] and [2]):
Changes between 1.1.1s and 1.1.1t [7 Feb 2023]
*) Fixed X.400 address type confusion in X.509 GeneralName.
There is a type confusion vulnerability relating to X.400 address processing
inside an X.509 GeneralName. X.400 addresses were parsed as an ASN1_STRING
but subsequently interpreted by GENERAL_NAME_cmp as an ASN1_TYPE. This
vulnerability may allow an attacker who can provide a certificate chain and
CRL (neither of which need have a valid signature) to pass arbitrary
pointers to a memcmp call, creating a possible read primitive, subject to
some constraints. Refer to the advisory for more information. Thanks to
David Benjamin for discovering this issue. (CVE-2023-0286)
This issue has been fixed by changing the public header file definition of
GENERAL_NAME so that x400Address reflects the implementation. It was not
possible for any existing application to successfully use the existing
definition; however, if any application references the x400Address field
(e.g. in dead code), note that the type of this field has changed. There is
no ABI change.
[Hugo Landau]
*) Fixed Use-after-free following BIO_new_NDEF.
The public API function BIO_new_NDEF is a helper function used for
streaming ASN.1 data via a BIO. It is primarily used internally to OpenSSL
to support the SMIME, CMS and PKCS7 streaming capabilities, but may also
be called directly by end user applications.
The function receives a BIO from the caller, prepends a new BIO_f_asn1
filter BIO onto the front of it to form a BIO chain, and then returns
the new head of the BIO chain to the caller. Under certain conditions,
for example if a CMS recipient public key is invalid, the new filter BIO
is freed and the function returns a NULL result indicating a failure.
However, in this case, the BIO chain is not properly cleaned up and the
BIO passed by the caller still retains internal pointers to the previously
freed filter BIO. If the caller then goes on to call BIO_pop() on the BIO
then a use-after-free will occur. This will most likely result in a crash.
(CVE-2023-0215)
[Viktor Dukhovni, Matt Caswell]
*) Fixed Double free after calling PEM_read_bio_ex.
The function PEM_read_bio_ex() reads a PEM file from a BIO and parses and
decodes the "name" (e.g. "CERTIFICATE"), any header data and the payload
data. If the function succeeds then the "name_out", "header" and "data"
arguments are populated with pointers to buffers containing the relevant
decoded data. The caller is responsible for freeing those buffers. It is
possible to construct a PEM file that results in 0 bytes of payload data.
In this case PEM_read_bio_ex() will return a failure code but will populate
the header argument with a pointer to a buffer that has already been freed.
If the caller also frees this buffer then a double free will occur. This
will most likely lead to a crash.
The functions PEM_read_bio() and PEM_read() are simple wrappers around
PEM_read_bio_ex() and therefore these functions are also directly affected.
These functions are also called indirectly by a number of other OpenSSL
functions including PEM_X509_INFO_read_bio_ex() and
SSL_CTX_use_serverinfo_file() which are also vulnerable. Some OpenSSL
internal uses of these functions are not vulnerable because the caller does
not free the header argument if PEM_read_bio_ex() returns a failure code.
(CVE-2022-4450)
[Kurt Roeckx, Matt Caswell]
*) Fixed Timing Oracle in RSA Decryption.
A timing based side channel exists in the OpenSSL RSA Decryption
implementation which could be sufficient to recover a plaintext across
a network in a Bleichenbacher style attack. To achieve a successful
decryption an attacker would have to be able to send a very large number
of trial messages for decryption. The vulnerability affects all RSA padding
modes: PKCS#1 v1.5, RSA-OAEP and RSASVE.
(CVE-2022-4304)
[Dmitry Belyavsky, Hubert Kario]
Changes between 1.1.1r and 1.1.1s [1 Nov 2022]
*) Fixed a regression introduced in 1.1.1r version not refreshing the
certificate data to be signed before signing the certificate.
[Gibeom Gwon]
Changes between 1.1.1q and 1.1.1r [11 Oct 2022]
*) Fixed the linux-mips64 Configure target which was missing the
SIXTY_FOUR_BIT bn_ops flag. This was causing heap corruption on that
platform.
[Adam Joseph]
*) Fixed a strict aliasing problem in bn_nist. Clang-14 optimisation was
causing incorrect results in some cases as a result.
[Paul Dale]
*) Fixed SSL_pending() and SSL_has_pending() with DTLS which were failing to
report correct results in some cases
[Matt Caswell]
*) Fixed a regression introduced in 1.1.1o for re-signing certificates with
different key sizes
[Todd Short]
*) Added the loongarch64 target
[Shi Pujin]
*) Fixed a DRBG seed propagation thread safety issue
[Bernd Edlinger]
*) Fixed a memory leak in tls13_generate_secret
[Bernd Edlinger]
*) Fixed reported performance degradation on aarch64. Restored the
implementation prior to commit 2621751 ("aes/asm/aesv8-armx.pl: avoid
32-bit lane assignment in CTR mode") for 64bit targets only, since it is
reportedly 2-17% slower and the silicon errata only affects 32bit targets.
The new algorithm is still used for 32 bit targets.
[Bernd Edlinger]
*) Added a missing header for memcmp that caused compilation failure on some
platforms
[Gregor Jasny]
[1] https://www.openssl.org/news/cl111.txt
[2] https://www.openssl.org/news/vulnerabilities.html
Signed-off-by: Peter Seiderer <ps.report@gmx.net>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Commit bed21bb9b added a patch changing configure.ac but failed to
update configure, which caused build failures: because of the timestamp
difference between configure and configure.ac, the makefile attempted
to re-run aclocal.
XZ_AUTORECONF = YES creates a circular dependency where the host autotools
need host-xz which also gets patched. Because of this, we need to patch
xz's configure script manually and NOT patch configure.ac so its timestamp
stays older than Makefile.in.
While we're doing this, correct the language in the commit body of the
patch, remove a stray whitespace, and fix the hunk offset for
configure.ac.
Fixes: bed21bb9b ("package/xz: fix microblaze compiles")
Fixes: http://autobuild.buildroot.net/results/958/9586f21e447ef9923606b1385ff333138406b685/
Signed-off-by: Vincent Fazio <vfazio@xes-inc.com>
[Peter: Only patch configure]
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The update of rust to v1.67.0 broke the bootstrap build. This commit
applies an upstream patch to fix it:
3fe64ebbce
Fixes:
http://autobuild.buildroot.org/results/214/214fcbb3458893784b7f85b60f7ee1edb428c77f/build-end.log
Signed-off-by: Sebastian Weyer <sebastian.weyer@smile.fr>
Cc: Eric Le Bihan <eric.le.bihan.dev@free.fr>
Cc: James Hilliard <james.hilliard1@gmail.com>
Reviewed-by: Romain Naour <romain.naour@smile.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
All the errors in existing scripts in utils/ have been fixed, so nothing
needs to be added to .checkpackageignore.
Signed-off-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Arnout Vandecappelle <arnout@mind.be>
In utils/test-pkg line 8:
if [ ! -z "${TEMP_CONF}" ]; then
^-- SC2236: Use -n instead of ! -z.
In utils/test-pkg line 75:
TEMP_CONF=$(mktemp /tmp/test-${pkg}-config.XXXXXX)
^----^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
TEMP_CONF=$(mktemp /tmp/test-"${pkg}"-config.XXXXXX)
In utils/test-pkg line 76:
echo "${pkg_br_name}=y" > ${TEMP_CONF}
^----------^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
echo "${pkg_br_name}=y" > "${TEMP_CONF}"
In utils/test-pkg line 86:
if [ ${random} -gt 0 ]; then
^-------^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
if [ "${random}" -gt 0 ]; then
In utils/test-pkg line 90:
if [ ${number} -gt 0 ]; then
^-------^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
if [ "${number}" -gt 0 ]; then
In utils/test-pkg line 109:
toolchains=($(sed -r -e 's/,.*//; /internal/d; /^#/d; /^$/d;' "${toolchains_csv}" \
^-- SC2207: Prefer mapfile or read -a to split command output (or quote to avoid splitting).
In utils/test-pkg line 110:
|if [ ${random} -gt 0 ]; then \
^-------^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
|if [ "${random}" -gt 0 ]; then \
In utils/test-pkg line 111:
sort -R |head -n ${random}
^-------^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
sort -R |head -n "${random}"
In utils/test-pkg line 121:
if [ ${nb_tc} -eq 0 ]; then
^------^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
if [ "${nb_tc}" -eq 0 ]; then
In utils/test-pkg line 134:
printf "%40s [%*d/%d]: " "${toolchain}" ${#nb_tc} ${nb} ${nb_tc}
^---^ SC2086: Double quote to prevent globbing and word splitting.
^------^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
printf "%40s [%*d/%d]: " "${toolchain}" ${#nb_tc} "${nb}" "${nb_tc}"
In utils/test-pkg line 146:
${nb} ${nb_skip} ${nb_fail} ${nb_legal} ${nb_show}
^---^ SC2086: Double quote to prevent globbing and word splitting.
^--------^ SC2086: Double quote to prevent globbing and word splitting.
^--------^ SC2086: Double quote to prevent globbing and word splitting.
^---------^ SC2086: Double quote to prevent globbing and word splitting.
^--------^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
"${nb}" "${nb_skip}" "${nb_fail}" "${nb_legal}" "${nb_show}"
In utils/test-pkg line 160:
CONFIG_= support/kconfig/merge_config.sh -O "${dir}" \
^-- SC1007: Remove space after = if trying to assign a value (for empty string, use var='' ... ).
In utils/test-pkg line 181:
if [ ${prepare_only} -eq 1 ]; then
^-------------^ SC2086: Double quote to prevent globbing and word splitting.
Did you mean:
if [ "${prepare_only}" -eq 1 ]; then
For more information:
https://www.shellcheck.net/wiki/SC1007 -- Remove space after = if trying to...
https://www.shellcheck.net/wiki/SC2207 -- Prefer mapfile or read -a to spli...
https://www.shellcheck.net/wiki/SC2086 -- Double quote to prevent globbing ...
The suggestions from shellcheck can be applied.
This script already uses bash so we can rely on mapfile.
The warning about CONFIG_= assignment misinterpreted the intention: we
don't want to assign to CONFIG_, we want to clear it from the
environment. Spell this as CONFIG_="".
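For the SC2207 case, the mapfile-based form looks like this (a
simplified sketch based on the quoted line; the real change in test-pkg
still has to carry the random-selection pipeline quoted above):

    mapfile -t toolchains < <(sed -r -e 's/,.*//; /internal/d; /^#/d; /^$/d;' "${toolchains_csv}")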
Signed-off-by: Arnout Vandecappelle <arnout@mind.be>
In utils/docker-run line 10:
--user $(id -u):$(id -g) \
^------^ SC2046: Quote this to prevent word splitting.
^------^ SC2046: Quote this to prevent word splitting.
The suggestions from shellcheck can be applied.
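The fix amounts to quoting the command substitutions, along the lines
of:

    --user "$(id -u):$(id -g)" \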
Signed-off-by: Arnout Vandecappelle <arnout@mind.be>