Add a new option that prints the (runtime) path of compiled .py files
when VERBOSE=1 is set.
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
When generating a .pyc file, the original .py source file path is
encoded in it. It is used for various purposes: traceback generation,
.pyc file comparison with its .py source, and code inspection.
By default, the source path used when invoking compileall is encoded in
the .pyc file. Since we use paths relative to TARGET_DIR, we end up with
paths encoded in the installed .pyc files on the target that are only
valid when resolved relative to '/'.
This breaks code inspection at runtime since the original source path
will be invalid unless the code is executed from '/'.
Unfortunately, compileall cannot be forced to use the proper path. It
was not written with cross-compilation usage in mind.
Rework the script to call py_compile.compile() directly with pertinent
options:
- The script now has a new --strip-root argument. This argument is
optional but will always be specified when compiling py files in
buildroot.
- All other (non-optional) arguments are folders in which all
"importable" .py files will be compiled to .pyc.
- Using --strip-root=$(TARGET_DIR), the future runtime path of each .py
file is computed and encoded into the compiled .pyc.
No need to change directory before running the script anymore.
The trickery used to handle error reporting was only applicable with
compileall. Since we implement our own "compileall", error reporting
becomes trivial.
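For illustration, a minimal sketch of the per-file compilation, with
hypothetical helper and argument names (not the actual script; the
verbose print corresponds to the option described above):

from __future__ import print_function

import os
import py_compile
import sys


def compile_one(py_file, strip_root=None, verbose=False):
    # Compute the future runtime path: with strip_root=$(TARGET_DIR),
    # "<target>/usr/lib/python3.8/foo.py" becomes "/usr/lib/python3.8/foo.py".
    runtime_path = py_file
    if strip_root and py_file.startswith(strip_root):
        runtime_path = "/" + os.path.relpath(py_file, strip_root)
    if verbose:
        print(runtime_path)
    try:
        # dfile is the source path recorded in the generated .pyc, so
        # tracebacks and code inspection on the target see the runtime
        # location; doraise=True turns failures into exceptions.
        py_compile.compile(py_file, dfile=runtime_path, doraise=True)
    except Exception as e:
        print("error compiling {}: {}".format(py_file, e), file=sys.stderr)
        return 1
    return 0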
Previously, we had a --force option to tell compileall.compile_dir() to
forcibly recompile files if they had changed. Now, we would have to
handle that ourselves. It turns out not to be easy: we would need to
delve into the format of byte-compiled files to extract metadata and
compare it with the expected values, and that format even depends on the
python version being used (fortunately, only two for us: python 2.7 and
the latest 3.x).
Still, this is deemed too complex, and byte-compiling is pretty fast, so
much so that it should be eclipsed by the build duration anyway.
So we just drop support for --force, and instead we always byte-compile.
Signed-off-by: Julien Floret <julien.floret@6wind.com>
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
[yann.morin.1998@free.fr:
- always byte-compile
- drop --force
- expand commit log to state so and explain why
]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Only run code when the script is executed directly (not imported).
Factorize command description by using the script's __doc__ variable.
Fix typo in --force help message.
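A minimal sketch of the pattern (the docstring text is illustrative):

#!/usr/bin/env python
"""Byte-compile all .py files from the given directories."""

from __future__ import print_function

import argparse


def main():
    # Use the module docstring as the command description instead of
    # duplicating the text in the parser.
    parser = argparse.ArgumentParser(description=__doc__)
    parser.parse_args()


if __name__ == "__main__":
    # Only run when the script is executed directly, not when imported.
    main()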
Signed-off-by: Robin Jarry <robin.jarry@6wind.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Currently, the check of defconfigs is run for all branches, even those
that are pushed only to run runtime tests. This is very inconvenient.
In fact, we only want to check the defconfigs on standard branches, that
is master, next, and the maintenance branches.
This will also drastically decrease the number of gitlab-ci minutes used
when one pushes their repo to gitlab.com, where the number of CI minutes
is now going to be pretty severely restricted.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Note that those tests were so far ignored only when requesting a single
defconfig build, or a single runtime test build; everything else
was triggering those tests.
However, it feels more natural that they are also ignored when all
defconfig builds, or all runtime tests, are explicitly requested.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Note that we do not propagate the existing comment, because it is
partially wrong; instead we just keep the per-condition comments.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
When we build the defconfigs, we already check they are correct, so
there is no need to run the correctness check explicitly.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Acked-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Note that we do not propagate the existing comment, because it is
partially wrong; instead we just keep the per-condition comments.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Currently, the image name and version are duplicated in the main
pipeline and the generated child pipeline.
This is a recipe for a future gaffe, so let's use the image from the
main pipeline when generating the child one.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This script is currently very crude, but we're going to extend it, at
which point it will be nicer to have functions, local variables, et al.
Introduce a main() in preparation of those future evolutions.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Note that one is silenced, rather than fixed: we indeed need to import
after we add the local directory to the modules search path.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Current X.org X server is incompatible with this driver.
We no longer support unmaintained versions of X.org X server.
Signed-off-by: Bernd Kuhls <bernd.kuhls@t-online.de>
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Drop the debug-level print as noticed by Titouan.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Titouan Christophe <titouan.christophe@railnova.eu>
https://toolchains.bootlin.com/ has been providing for a few years a
number of ready-to-use pre-built toolchains, for a wide range of
architectures (which, it turns out, are all built using Buildroot).
While toolchains.bootlin.com provides Buildroot config fragments to
easily use those toolchains with Buildroot (see [0] for example), this
is not visible anywhere. So instead, we would like to add support for
these toolchains in Buildroot just like we have existing support for
Linaro, ARM, Synopsys, etc. toolchains.
[0] https://toolchains.bootlin.com/downloads/releases/toolchains/aarch64/fragments/aarch64--glibc--bleeding-edge-2020.02-2.frag
However, the number of toolchains provided by toolchains.bootlin.com
is really large, and they are regularly updated. Maintaining that
manually would be time consuming and error-prone. So instead, this
commit introduces a script that automatically generates:
- toolchain/toolchain-external/toolchain-external-bootlin/Config.in.options
- toolchain/toolchain-external/toolchain-external-bootlin/toolchain-external-bootlin.mk
- toolchain/toolchain-external/toolchain-external-bootlin/toolchain-external-bootlin.hash
- support/testing/tests/toolchain/test_external_bootlin.py
We create a single external toolchain package, with a Kconfig "choice"
as a sub-option to select the toolchain variant to be used. The script
contains a Python dict that provides the mapping between the
toolchains provided by toolchains.bootlin.com, and the architecture
options/variants they are applicable to.
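To illustrate the shape of that mapping (the entries below are a
made-up excerpt, not the real table):

# Maps each toolchains.bootlin.com architecture name to the Buildroot
# Kconfig symbols that must be enabled for the toolchain to apply.
arches = {
    "aarch64": ["BR2_aarch64"],
    "armv7-eabihf": ["BR2_arm", "BR2_ARM_EABIHF"],
    "x86-64-core-i7": ["BR2_x86_64"],
}


def depends_line(arch):
    # Produce a Config.in "depends on" line for one architecture entry.
    return "\tdepends on " + " && ".join(arches[arch])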
The test cases make it possible to verify that the toolchain
configuration is correct, and that it is able to build a Busybox-based
system. They don't do any runtime testing, as such testing is already
done by toolchains.bootlin.com: the test cases here are only meant to
verify that the toolchain-external-bootlin package works as expected.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Reviewed-by: Titouan Christophe <titouan.christophe@railnova.eu>
Tested-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This script takes as input on stdin a JSON description of the packages
used for a given configuration. This description is the one generated
by "make show-info".
The script generates the list of all the packages used and whether they
are affected by a CVE. The output is either a JSON or an HTML file similar
to the one generated by pkg-stats.
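A rough sketch of the plumbing, assuming the "make show-info" output is
a JSON object keyed by package name (the field names and shape are
assumptions, and the CVE matching itself is left out):

import json
import sys


def main():
    # Assumed input shape: {"pkg-name": {"version": "1.2.3", ...}, ...}
    packages = json.load(sys.stdin)
    report = {}
    for name, info in packages.items():
        report[name] = {
            "version": info.get("version"),
            "cves": [],  # to be filled by the CVE matching logic
        }
    json.dump(report, sys.stdout, indent=2)


if __name__ == "__main__":
    main()

In practice the input would be piped from "make show-info".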
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Tested-by: Matthew Weber <matthew.weber@rockwellcollins.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The affects() method of the CVE class uses the Package class defined in
pkg-stats. The purpose of migrating the CVE class outside of pkg-stats
was to be able to reuse it from other scripts. So let's remove the
Package dependency and only use the needed information.
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
In 2019, the JSON vulnerability feeds switched their schema from
version 1.0 to 1.1.
The main difference is the removal of the "affects" element that we
were using to check if a package was affected by a CVE.
This information is now available in the "configuration" element, which
contains the cpeid as well as properties about the affected
versions. With these properties, instead of an explicit list of the
affected versions, it is possible to express a range of versions.
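As an illustration of how such a range could be evaluated (the
version_* field names follow the NVD JSON 1.1 schema; the helper itself
is a sketch, not the actual code):

from distutils.version import LooseVersion


def version_in_range(version, cpe_match):
    # cpe_match is one entry from the CVE "configurations" nodes; the
    # version_* properties describe a range instead of an explicit list.
    v = LooseVersion(version)
    checks = [
        ("versionStartIncluding", lambda a, b: a >= b),
        ("versionStartExcluding", lambda a, b: a > b),
        ("versionEndIncluding", lambda a, b: a <= b),
        ("versionEndExcluding", lambda a, b: a < b),
    ]
    for field, ok in checks:
        if field in cpe_match and not ok(v, LooseVersion(cpe_match[field])):
            return False
    return True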
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
In order to be able to use the CVE checking logic outside of
pkg-stats, move the CVE class in a module that can be used by other
scripts.
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Some CVE entries in the NVD database have version_value set to "-",
which seems to indicate that it applies to all versions of the
software project, or that they don't really know which versions are
affected, and which are not.
So, to give them the benefit of the doubt, it seems more appropriate to
consider such CVEs as affecting our packages.
This makes the total number of CVEs affecting our next branch jump
from 141 CVEs to 658 CVEs, but that number will go back down once we
switch to the JSON 1.1 schema. Indeed, in the JSON 1.0 schema, there
are often cases where a version_value is set to "-" *and* specific
versions are also listed.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This commit slightly improves the output of pkg-stats by showing the
progress of the upstream URL checks and latest version retrieval, on a
package basis:
Checking URL status
[0001/0062] curlpp
[0002/0062] cmocka
[0003/0062] snappy
[0004/0062] nload
[...]
[0060/0062] librtas
[0061/0062] libsilk
[0062/0062] jhead
Getting latest versions ...
[0001/0064] libglob
[0002/0064] perl-http-daemon
[0003/0064] shadowsocks-libev
[...]
[0061/0064] lua-flu
[0062/0064] python-aiohttp-security
[0063/0064] ljlinenoise
[0064/0064] matchbox-lib
Note that the above sample was run on 64 packages. Only 62 packages
appear for the URL status check, because packages that do not have any
URL in their Config.in file, or don't have any Config.in file at all,
are not checked and therefore not accounted for.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This commit reworks the code that checks the upstream URL of each
package (specified in its Config.in file), using the aiohttp
module. This makes the implementation much more elegant, and avoids
the problematic multiprocessing Pool which is causing issues in some
situations.
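A minimal sketch of the async approach (not the actual pkg-stats code):

import asyncio

import aiohttp


async def check_url(session, pkg, url):
    # One coroutine per package; failures are reported, not raised, so a
    # single broken URL does not abort the whole batch.
    try:
        async with session.get(url) as resp:
            return pkg, resp.status
    except (aiohttp.ClientError, asyncio.TimeoutError):
        return pkg, None


async def check_all(urls):
    timeout = aiohttp.ClientTimeout(total=30)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        tasks = [check_url(session, pkg, url) for pkg, url in urls.items()]
        return await asyncio.gather(*tasks)


if __name__ == "__main__":
    print(asyncio.run(check_all({"busybox": "https://busybox.net"})))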
Suggested-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This commit reworks the code that retrieves the latest upstream
version of each package from release-monitoring.org using the aiohttp
module. This makes the implementation much more elegant, and avoids
the problematic multiprocessing Pool which is causing issues in some
situations.
Since we're now using some async functionality, the script is Python
3.x only, so the shebang is changed to make this clear.
Suggested-by: Titouan Christophe <titouan.christophe@railnova.eu>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Remove patch that is no longer needed as of upstream commit
1c33be992e8120abd20add8021e4d91d226f5b6a which removed the old VM.
We need to add an exclusion rule for guile modules to check-bin-arch
as they appear as valid ELF binaries but with an architecture of
"None".
Signed-off-by: James Hilliard <james.hilliard1@gmail.com>
[Thomas:
- bump to 3.0.4
- rework how check-bin-arch excludes checking the Guile .go files]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
BR2_VERSION_FULL is currently defined as follows:
BR2_VERSION_FULL := $(BR2_VERSION)$(shell $(TOPDIR)/support/scripts/setlocalversion)
This BR2_VERSION_FULL value then gets used as the "VERSION" variable
in the /etc/os-release file.
The logic of "setlocalversion" is that if it is exactly on a tag, it
returns nothing.
If it is on a tag + a number of commits, then it returns only
-XYZ-gABC where XYZ is the number of commits since the last tag, and
ABC the git commit hash (these are extracted from git describe).
This output then gets concatenated to BR2_VERSION which gives
something like 2020.05 or 2020.05-00123-g5bc6a.
The issue is that when you're on a tag specific to your project, which
is not a Buildroot YYYY.MM tag, then the output of setlocalversion is
empty, and all you get as VERSION in os-release is $(BR2_VERSION)
which is not really nice. Worse, if you have another non-official
Buildroot tag between the last official Buildroot tag/version and
where you are, you will get $(BR2_VERSION)-XYZ-gABC, but XYZ will not
correspond to the number of commits since BR2_VERSION, but since the
last tag that "git describe" as found, which is clearly incorrect.
Here is an example: you're on master, "make print-version" (which
displays BR2_VERSION_FULL) will show:
$ make print-version
2020.08-git-00758-gc351877a6e
So far so good. Now, you create a tag say 5 commits "before" master,
and show BR2_VERSION_FULL again:
$ git tag -a -m "dummy tag" dummy-tag HEAD~5
$ make print-version
2020.08-git-00005-gc351877a6e
This makes you believe you are 5 commits above 2020.08, which is
absolutely wrong.
So this commit simplifies the logic of setlocalversion to simply
return what "git describe" provides, and not prepend $(BR2_VERSION) in
the main Makefile. Since official Buildroot tags match official
Buildroot version names, you get the same output when you're on an
official Buildroot tag, or some commits above a Buildroot tag. And in
other cases, you get a sensible output. The logic is also adjusted for
the Mercurial case.
In the above situation, with this commit applied, we get:
$ make print-version
dummy-tag-6-g6258cdddeb
(6 commits instead of 5 as we have this very commit applied, but at
least it's 6 commits on top of the dummy-tag)
Finally, if you're not using a version control system, setlocalversion
was already returning nothing, so in this case, the Makefile simply
sets BR2_VERSION_FULL to BR2_VERSION to preserve this behavior.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The defconfig check has been introduced by the previous patch, before
building each defconfig, but those builds are only done once a week or
even less often.
Checking whether a defconfig is valid can be done on every push to the
repository, since it only takes a few seconds.
This allows detecting a problem in a defconfig as soon as possible, and
possibly avoids breaking the build while build-testing all defconfigs.
Introduce a new job template ".defconfig_check" in gitlab-ci.yml.in and
modify generate-gitlab-ci-yml to create a job for each defconfig to run
the test.
Although we could have used only one job to run all the tests, using
one job per defconfig makes it easy to identify in gitlab which
defconfig is failing.
Tested:
https://gitlab.com/kubu93/buildroot/pipelines/138331069
https://gitlab.com/kubu93/buildroot/pipelines/171223758
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
For the same reason as for 50b747f212,
we need to check if the generated configuration file (.config)
contains all symbols present in the defconfig file.
If not, there is an issue with the defconfig.
This script will be used in .gitlab-ci.yml.
Inspired by is_toolchain_usable() function from genrandconfig:
https://git.busybox.net/buildroot/tree/utils/genrandconfig?h=2020.02#n164
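A condensed sketch of the idea (argument handling and error reporting
simplified; not the actual script):

import sys


def defconfig_lines(path):
    # Generator over the stripped, non-empty lines of the defconfig.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield line


def main(defconfig, dotconfig):
    with open(dotconfig) as f:
        config = set(line.strip() for line in f)
    missing = [line for line in defconfig_lines(defconfig)
               if line not in config]
    if missing:
        for line in missing:
            print("Missing: {}".format(line))
        sys.exit(1)


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])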
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
[yann.morin.1998@free.fr:
- strip defconfig lines when reading them
- use a generator to read the defconfig lines
- no need to strip() again when building the missing list
- testing the list directly, not its len()
- simply sys.exit(1) in the error condition
]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This fixes the following flake8 warning:
support/scripts/pkg-stats:1005:9: E117 over-indented
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
With python 3, when a package has a version number x-y-z instead of
x.y.z, the version returned by LooseVersion can't be compared, which
raises a TypeError exception:
Traceback (most recent call last):
File "./support/scripts/pkg-stats", line 1062, in <module>
__main__()
File "./support/scripts/pkg-stats", line 1051, in __main__
check_package_cves(args.nvd_path, {p.name: p for p in packages})
File "./support/scripts/pkg-stats", line 613, in check_package_cves
if pkg_name in packages and cve.affects(packages[pkg_name]):
File "./support/scripts/pkg-stats", line 386, in affects
return pkg_version <= cve_affected_version
File "/usr/lib64/python3.8/distutils/version.py", line 58, in __le__
c = self._cmp(other)
File "/usr/lib64/python3.8/distutils/version.py", line 337, in _cmp
if self.version < other.version:
TypeError: '<' not supported between instances of 'str' and 'int'
This patch handles this exception by adding a new return value when
the comparison can't be done. The code is adjusted to take account of this
change. For now, a return value of CVE_UNKNOWN is handled the same way
as a CVE_DOESNT_AFFECT return value, but this can be improved later
on.
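The gist of the fix, sketched (CVE_AFFECTS is an assumed name; the log
only names CVE_UNKNOWN and CVE_DOESNT_AFFECT):

from distutils.version import LooseVersion

CVE_DOESNT_AFFECT = 0
CVE_AFFECTS = 1        # assumed name for the "affected" case
CVE_UNKNOWN = 2        # new: the versions could not be compared


def compare_versions(pkg_version, cve_affected_version):
    try:
        if LooseVersion(pkg_version) <= LooseVersion(cve_affected_version):
            return CVE_AFFECTS
        return CVE_DOESNT_AFFECT
    except TypeError:
        # Mixed schemes such as "1-2-3" vs "1.2.3" make LooseVersion
        # compare str with int under python 3; report "don't know".
        return CVE_UNKNOWN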
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The error is misleading: it reports that no name was provided,
when in fact the external.desc file is missing.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
When a br2-external tree has an issue, e.g. a missing file, or does not
have a name, or the name uses invalid chars, we report that condition by
setting the variable BR2_EXTERNAL_ERROR.
That variable is defined in the script support/scripts/br2-external,
which outputs it on stdout, and it is then checked by the Makefile.
Before d027cd75d0, stdout was explicitly redirected to the generated
.mk file, with exec >"${ofile}" as the Makefile and Kconfig
fragments were generated each with their own call to the script, and
the validation phase would emit the BR2_EXTERNAL_ERROR variable in the
Makefile fragment.
But with d027cd75d0, both the Makefile and Kconfig fragments are now
generated with a single call to the script, and as such the semantics of
the script changed: only each of the actual generators, do_mk and
do_kconfig, has its output redirected. This left do_validate with
the default stdout, which would emit BR2_EXTERNAL_ERROR on stdout.
In turn, the stdout of the script would be interpreted as part of the
Makefile. But this does not end up very well when a br2-external tree
indeed has an error:
- missing an external.desc file:
Makefile:184: *** multiple target patterns. Stop.
- empty external.desc file:
Config.in:22: can't open file "output/.br2-external.in.paths"
So we must redirect the output of the validation step to the
Makefile fragment, so that the error message is correctly caught by the
top-level Makefile.
Note that we don't need to append in do_mk, and we can do an overwrite
redirection: if we go so far as to call do_mk, it means there was no
error, and thus the fragment is empty.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Tested-by: Romain Naour <romain.naour@gmail.com>
As reported by a gitlab runtime test [1] and on the mailing list
[2], some runtime tests are failing on slow host machines when
the qemu-system-<arch> is missing on the host.
The boot-qemu-image.py script needs to wait some time after
calling pexpect.spawn() in order to make sure that the qemu
process has been executed by start-qemu.sh.
If start-qemu.sh fails due to a missing qemu-system binary,
an exception will be thrown by child.expect() and should be
caught by the error handling (pexpect.EOF).
After spending a lot of time investigating with Yann E. MORIN
[3], it seems that short-lived child processes are a corner case
that is not handled very well...
Without adding a sleep(1), child.expect() can trigger an
exception before setting the exitstatus of the spawned
process. This issue can be reproduced on a gitlab runner or
by adding "exit 1" in the first line of start-qemu.sh
(after the shebang).
There is even the same workaround in some pexpect examples [4].
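A sketch of the workaround (the spawned command and the expected prompt
are illustrative):

import sys
import time

import pexpect

child = pexpect.spawn("./start-qemu.sh", timeout=60, encoding="utf-8")
# Give the wrapper time to actually exec qemu: with a short-lived child
# (e.g. the qemu-system-<arch> binary is missing), expect() may raise
# before the exit status of the spawned process has been recorded.
time.sleep(1)
try:
    child.expect("buildroot login:")
except pexpect.EOF:
    # start-qemu.sh exited early, e.g. the qemu binary is not installed;
    # do not error out, just report it.
    print("qemu could not be started")
    sys.exit(0)
except pexpect.TIMEOUT:
    print("timeout waiting for the login prompt")
    sys.exit(1)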
Thanks to Yann for the help while investigating the issue.
Tested:
https://gitlab.com/kubu93/buildroot/pipelines/138472925
[1] https://gitlab.com/kubu93/buildroot/pipelines/135487475
[2] http://lists.busybox.net/pipermail/buildroot/2020-April/280037.html
[3] http://patchwork.ozlabs.org/project/buildroot/patch/20200418161023.1221799-1-romain.naour@gmail.com/
[4] https://github.com/pexpect/pexpect/blob/master/examples/ssh_tunnel.py#L80
Fixes:
https://gitlab.com/kubu93/buildroot/-/jobs/509053135
Signed-off-by: Romain Naour <romain.naour@gmail.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
[yann.morin.1998@free.fr: reorder imports]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
This script is intended to be used by gitlab CI to test at runtime Qemu
images generated by Buildroot's Qemu defconfigs.
This makes it possible to troubleshoot issues that may be associated
with defective builds, by launching a qemu machine, sending the root
password, waiting for the login shell and then performing a shutdown.
This script is inspired by toolchain builder [1] and the Buildroot
testing infrastructure.
The gitlab CI will call this script for each defconfig build, but only
Qemu defconfigs will be runtime tested; all other defconfigs are ignored.
Some Qemu defconfigs must be used with a specific Qemu version (fork)
that is not always available, so the script doesn't error out when it
can't spawn a missing command. That condition is printed in the
log anyway.
Finally, the script starts Qemu like it is done for the Buildroot
testing infrastructure (using pexpect).
Note:
We noticed some timeout issues with pexpect when the Qemu machine is
powered off. That's because the Qemu process doesn't stop even when the
system is halted (after "System halted"). So the script doesn't error
out when such a timeout occurs. The behaviour depends on the
architecture emulated by Qemu.
[1] https://github.com/bootlin/toolchains-builder/blob/master/build.sh
Signed-off-by: Jugurtha BELKALEM <jugurtha.belkalem@smile.fr>
Signed-off-by: Romain Naour <romain.naour@smile.fr>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
When the 'nvd-path', 'json' and 'html' options are used like this:
--html ~/foo
then the tilde expansion is properly done by the shell. However, when
they are used like this:
--html=~/foo
The shell doesn't do the tilde expansion, and pkg-stats doesn't do
it. This commit modifies pkg-stats to ensure that tilde expansion is
done when parsing the 'nvd-path', 'json' and 'html' arguments.
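One way to do this, as a sketch (whether pkg-stats hooks the expansion
into argparse or does it after parsing is a detail):

import argparse
import os

parser = argparse.ArgumentParser()
# Expand "~" ourselves, since the shell does not expand it in "--html=~/foo".
parser.add_argument("--html", type=os.path.expanduser)
parser.add_argument("--json", type=os.path.expanduser)
parser.add_argument("--nvd-path", type=os.path.expanduser)

args = parser.parse_args(["--html=~/foo"])
print(args.html)  # e.g. /home/user/foo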
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
[Thomas: improve commit log]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
flake8 complains with:
support/scripts/pkg-stats:339:13: E722 do not use bare 'except'
Due to the construct:
try:
    something
except:
    print("some message")
    raise
Which is in fact OK because the exception is re-raised. This issue is
discussed at https://github.com/PyCQA/pycodestyle/issues/703, and the
general agreement is that these "bare except" are OK, and should be
ignored from flake8 using a noqa statement.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
flake8 complains with:
pkg-stats:38:1: E402 module level import not at top of file
This is due to sys.path.append() being before the import from
getdeveloperlib, but we really need this sys.path.append() to be
before, so let's ignore this flake8 warning.
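The silenced case looks roughly like this (the exact path computation is
illustrative):

import os
import sys

# getdeveloperlib lives in the utils/ directory, which must be on the
# module search path before the import; the path below is illustrative.
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)),
                             "..", "..", "utils"))
import getdeveloperlib  # noqa: E402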
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The C program inside check-kernel-headers.sh has two checking modes: a
strict one and a loose one.
In strict mode, we want the kernel headers version declared by the
user to match exactly the one of the toolchain.
In loose mode, we want the kernel headers version of the toolchain to
be greater than or equal to the one declared by the user: this is used
when we have a toolchain that has newer headers than the latest
version known by Buildroot.
However, in loose mode, we continue to show the "Incorrect kernel
headers version" message, even though we then return a zero error
code. This is very confusing: you see an error displayed on the
terminal, but the build goes on.
We fix that by doing the loose check first, and returning 0 if
it succeeds. Then we move on to the strict check, where we want
the versions to be identical.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
If there is no infra set, or the infra is virtual, the status is set to
'na'. This is done for the following checks (a rough sketch follows the
list):
- license
- license-files
- hash
- hash-license
- patches
- version
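Roughly, the idea (the Package attributes and the status tuple format
are assumptions about the pkg-stats internals):

NA_CHECKS = ("license", "license-files", "hash", "hash-license",
             "patches", "version")


def set_na_status(pkg):
    # Assumed shape: pkg.infras is a list of (infra, host/target) tuples,
    # and pkg.status maps a check name to a (value, reason) tuple.
    infras = [infra for infra, _ in (pkg.infras or [])]
    if not infras or set(infras) == {"virtual"}:
        for check in NA_CHECKS:
            pkg.status[check] = ("na", "no valid package infra")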
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
This value can be used for later processing.
In the buildroot-stats application this is used to create links pointing
to the git repo of buildroot.
Signed-off-by: Heiko Thiery <heiko.thiery@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>