Add a new package variable that packages can set to specify that they
need git submodules.
Only accept this option if the download method is git, as we cannot get
submodules via an http download (via wget).
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Aleksandar Simeonov <aleksandar@barix.com>
Tested-by: Matt Weber <matt@thewebers.ws>
Reviewed-by: Matt Weber <matt@thewebers.ws>
Tested-By: Nicolas Cavallari <nicolas.cavallari@green-communications.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Some git repositories may be split into a master repository and
submodules. Up until now, we did not have support for submodules,
because we were using bare clones, in which it is not possible to
update the list of submodules.
Now that we are using plain clones with a working copy, we can retrieve
the submodules.
Add an option to the git download helper to kick off the update of
submodules, so that they are only fetched for the packages that
require them. Also document the existing -q option at the same time.
Submodules have a .git file at their root, which contains the path to
the real .git directory of the master repository. Since we remove that
.git directory anyway, there is no point in keeping those .git files
either.
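For reference, a minimal sketch of what this amounts to once the working
copy is checked out (the actual option handling in the helper may differ):

    git submodule update --init --recursive   # fetch and check out all submodules
    find . -type f -name .git -delete         # drop the submodules' .git pointer files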
Note: this is currently unused, but will be enabled with the follow-up
patch that adds the necessary parts in the pkg-generic and pkg-download
infrastructures.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Matt Weber <matt@thewebers.ws>
Reviewed-by: Matt Weber <matt@thewebers.ws>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
We currently use git-archive to generate the tarball. This is all handy
and dandy, but git-archive does not support submodules. In the follow-up
patch, we're going to handle submodules, so we would not be able to use
git-archive.
Instead, we manually generate the archive:
- extract the tree to the requested cset,
- get the date of the commit to store in the archive,
- store only numeric owners,
- store owner and group as 0 (zero, although any arbitrary value would
have been fine, as long as it's a constant),
- sort the files to store in the archive.
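A hedged sketch of that manual generation (variable names and exact
options are illustrative, not the actual helper code):

    git checkout -f -q "${cset}"                          # extract the tree at the requested cset
    date="$(git log -1 --pretty=format:%cD)"              # commit date, used as a constant mtime
    find . -not -type d -and -not -path './.git/*' \
        | LC_ALL=C sort >"${output}.list"                 # sorted file list, .git excluded
    tar cf "${output}.tar" --numeric-owner --owner=0 --group=0 \
        --mtime="${date}" -T "${output}.list"             # constant owners, stable order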
We also get rid of the .git directory, because there is no reason to
keep it in the context of Buildroot. Some people would love to keep it
so as to speed up later downloads when updating a package, but that is
not really doable. For example:
- use current Buildroot
- it would need foo-12345, so do a clone and keep the .git in the
generated tarball
- update Buildroot
- it would need foo-98765
For that second clone, how could we know we would have to first extract
foo-12345? So, the .git in the archive is pretty much useless for
Buildroot.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Matt Weber <matt@thewebers.ws>
Reviewed-by: Matt Weber <matt@thewebers.ws>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Currently, we are using bare clones, so as to minimise the disk usage,
most notably for largeish repositories such as the one for the Linux
kernel, which can go beyond the 1GiB barrier.
However, this precludes updating (and thus using) the submodules, if
any, of the repositories, as a working copy is required to use
submodules (because we need to know the list of submodules, where to find
them, where to clone them, what cset to check out, and all of that
depends on the checked-out cset of the parent repository).
Switch to using /plain/ clones with a working copy.
This means that the extra refs used by some forges (like pull requests
on GitHub, or changes on Gerrit...) are no longer fetched as part of
the clone, because git does not offer to do a mirror clone when there is
a working copy.
Instead, we have to fetch those special refs by hand. Since there is no
easy way to know whether the cset the user asked for is such a
special ref or not, we just try to always fetch the cset requested by
the user; if this fails, we assume that this is not a special ref (most
probably, it is a sha1) and we defer the check to the archive creation,
which would fail if the requested cset is missing anyway.
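A minimal sketch of that fallback (names are illustrative):

    # Try to fetch the requested cset as a (possibly special) ref; if that fails,
    # assume it is a plain sha1 and let the later archive step catch real errors.
    if ! git fetch origin "${cset}:${cset}"; then
        printf 'Could not fetch %s, assuming it is a sha1\n' "${cset}" >&2
    fi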
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Matt Weber <matt@thewebers.ws>
Reviewed-by: Matt Weber <matt@thewebers.ws>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
In most cases, the Python package dependencies found in setup.py are
runtime dependencies and hence don't need to be mentioned in the *.mk
file.
Also add a '# runtime' tag to select statements in Config.in.
__create_mk_requirements() itself is left for future uses (cffi backend
handling etc.).
Signed-off-by: Yegor Yefremov <yegorslists@googlemail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Currently, the legal-info infra only saves the source archive of a
package. However, that's not enough, as we may apply some patches to
the package sources.
We do suggest users to also redistribute the Buildroot sources as part
of their compliance distribution, so the patches bundled in Buildroot
would indeed be included in the compliance distribution.
However, that's still not enough, since we may download some patches, or
the user may use a global patch directory. Patches in there might not
end up in the compliance distribution, and there are risks of
non-conformity.
So, always include patches alongside the source archive.
To ensure reproducibility, we also generate a series file, so patches
can be re-applied in the correct order.
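For illustration, saving the patches roughly amounts to this (list-file
name and directory layout are simplified, not the actual infra code):

    mkdir -p "${legal_info_dir}/foo-1.0-patches"
    while read -r patch; do
        cp "${patch}" "${legal_info_dir}/foo-1.0-patches/"
        printf '%s\n' "${patch##*/}" >>"${legal_info_dir}/foo-1.0-patches/series"
    done <"${foo_build_dir}/.applied_patches_list"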
We get the list of patches to include from the list of patches that were
applied by the package infrastructure (via the apply-patches support
script). So, we need packages to be properly extracted and patched
before we can save their legal-info, not just when they define
_LICENSE_FILES.
Update the legal-info header accordingly.
Note: this means that, when a package is not patched and defines no
_LICENSE_FILES, we will extract and patch it for nothing. There is no
easy way to know whether we have to patch a package or not. We can only
either duplicate the logic to detect patches (bad) or rely on the infra
actually patching the package. Also, a vast majority of packages are
either patched, or define _LICENSE_FILES, so it is best and easiest to
always extract and patch them prior to legal-info.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Luca Ceresoli <luca@lucaceresoli.net>
Tested-by: Luca Ceresoli <luca@lucaceresoli.net>
Reviewed-by: Luca Ceresoli <luca@lucaceresoli.net>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Patches we save can come from various locations:
- bundled with Buildroot
- downloaded
- from one or more global-patch-dir
It is possible that two patches lying in different locations have the
same basename, like so (the first is bundled, the second comes from a
hypothetical global-patch-dir):
package/foo/0001-fix-Makefile.patch
/path/to/my/patches/foo/0001-fix-Makefile.patch
In that case, when running legal-info, we'd save only the second patch,
overwriting the first. That would be problematic, because:
- either the second patch depends on the first, and thus would no longer
apply (this is easy to detect, though),
- or the second patch does not depend on the first, and the compliance
delivery will not be complete (this is much harder to detect).
We fix that by checking that no two patches have the same basename.
If we find that the basename of the patch to be applied collides with
that of a previously applied patch, we error out and report the duplicate.
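A sketch of the check added to the patch-applying logic (list-file and
variable names are illustrative):

    # Refuse to apply a patch whose basename was already seen earlier.
    if sed 's,.*/,,' "${builddir}/.applied_patches_list" | grep -qxF "${patch##*/}"; then
        echo "ERROR: duplicate patch basename '${patch##*/}'" >&2
        exit 1
    fi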
The unfortunate side-effect is that existing setups will now break in
that situation, but that's a minor, corner-case issue that is easily
fixed.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Luca Ceresoli <luca@lucaceresoli.net>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
[Thomas: adjust coding style, fix minor typos in the commit log.]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Currently, we only store the filenames of the applied patches.
However, we will soon want to install those patches in the legal-info
directory, so we'll have to know where those patches come from.
Instead of duplicating the logic to find the patches (bundled,
downloaded, from a global patch dir...), just store the full path to
each of those patches so we can retrieve them more easily later on.
Also always create the list-file, even if empty, so that we need not
test for its existence before reading it.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Luca Ceresoli <luca@lucaceresoli.net>
Reviewed-by: Luca Ceresoli <luca@lucaceresoli.net>
[Tested only with patches in the Buildroot sources]
Tested-by: Luca Ceresoli <luca@lucaceresoli.net>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
[Thomas: used $PWD instead of $(pwd), as suggested by Arnout.]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
A utility for creating Python packages from the Python Package Index.
It fetches package info from http://pypi.python.org and generates the
corresponding package files.
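A possible invocation (script path and option taken from the script's
help; treat the exact names as illustrative):

    ./support/scripts/scanpypi flask requests -o package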
Signed-off-by: Denis THULIN <denis.thulin@openwide.fr>
Tested-by: Carlos Santos <casantos@datacom.ind.br>
Signed-off-by: Yegor Yefremov <yegorslists@googlemail.com>
[Thomas: minor tweaks.]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Currently, each python package (be it the python interpreter package
itself or external python modules) is responsible for compiling its
.py files into .pyc files. Unfortunately, this is not ideal, as some
packages only install .py files without compiling them into .pyc files.
In this case, if the Buildroot configuration specifies to keep only the
.pyc files, the .py files are removed and lost.
To address this, this commit changes the logic by making the
compilation of .pyc files a global operation: the python interpreter
packages register a target finalize hook that is in charge of
compiling all installed .py files.
The *.pyc generation on a per package basis is disabled in the
python-package infrastructure by passing the "--no-compile" option to
setup.py.
The *.pyc generation for the Python interpreter internal modules is
disabled through the --disable-pyc-build configure option.
A small helper script is used to perform the compilation; the purpose
of this script is to abort the compilation process if one of the .py
files cannot be compiled. It has been provided by Samuel Martin and
integrated into this commit.
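For illustration, the finalize hook essentially boils down to an
invocation like this (paths and interpreter version are illustrative):

    "${HOST_DIR}/usr/bin/python" support/scripts/pycompile.py \
        "${TARGET_DIR}/usr/lib/python2.7"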
Signed-off-by: Yegor Yefremov <yegorslists@googlemail.com>
Cc: Samuel Martin <s.martin49@gmail.com>
[Thomas:
- rework for python 3.5
- integrate Samuel proposal that allows to detect compilation
failures.]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
The graph-build-time help text currently looks like this:
usage: graph-build-time [-h] [--type GRAPH_TYPE] [--order GRAPH_ORDER]
[--alternate-colors] [--input OUTPUT] --output OUTPUT
Obviously, naming the parameter for --input as OUTPUT is not a very
good idea, so this commit fixes that to name it "INPUT", as expected.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
When preparing the legal-info, the source archives are copied into the
legal-info/ output directory. When the archives are big, this can take
quite a bit of time and unnecessarily uses disk space. When the
legal-info output directory is on the same filesystem as the BR2_DL_DIR,
we can easily reduce the copy time and disk usage by using hardlinks
instead of copying. However, the BR2_DL_DIR may be on a different
filesystem, so we must fall back to copying in that case.
Introduce a helper script that copies a source file into a destination
directory, by first attempting to hard-link, and falling back to a
plain copy in case the hardlink fails.
In case the destination already exists, it is forcibly removed first, to
avoid clobbering any existing target file (and especially any hardlink to
it), since cp -f does not remove the destination file, but clobbers it.
In some situations, it will be necessary that the destination file is
named differently than the source, so if a third argument is specified,
it is treated as the basename of the destination file.
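The core of the helper is roughly the following (argument handling
simplified, names illustrative):

    src="${1}"; dir="${2}"; base="${3:-${src##*/}}"
    mkdir -p "${dir}"
    rm -f "${dir}/${base}"                       # never clobber an existing file/hardlink
    ln "${src}" "${dir}/${base}" 2>/dev/null \
        || cp "${src}" "${dir}/${base}"          # fall back when crossing filesystems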
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Luca Ceresoli <luca@lucaceresoli.net>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
At least syslinux is installing stuff in HOST_DIR/sbin.
Cc: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Make the graph-depends script open the output file in text mode, since
only ASCII characters will be written.
This change fixes the following error, occurring when the default host
python interpreter is python3:
make: Entering directory '/opt/buildroot'
Getting targets
Getting dependencies for ['toolchain-external', 'toolchain', 'busybox', ...]
Getting dependencies for ['host-python3', 'host-pkgconf', 'host-gettext', ...]
Getting dependencies for ['host-libxml2', 'host-swig', 'host-m4', ...]
Getting version for ['toolchain-external', 'toolchain', 'busybox', ...]
Traceback (most recent call last):
File "/opt/buildroot/support/scripts/graph-depends", line 425, in <module>
outfile.write("digraph G {\n")
TypeError: a bytes-like object is required, not 'str'
Makefile:807: recipe for target 'graph-depends' failed
make[1]: *** [graph-depends] Error 1
Makefile:84: recipe for target '_all' failed
make: *** [_all] Error 2
make: Leaving directory '/opt/buildroot'
While with python2 adding 'b' to the opening mode has no effect on
Linux (cf. [2]), the above error is expected with python3 (cf. [1]).
Therefore, just open the outfile in the default (i.e. text) mode.
[1] https://docs.python.org/3/library/functions.html#open
[2] https://docs.python.org/2/library/functions.html#open
Signed-off-by: Samuel Martin <s.martin49@gmail.com>
Acked-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Currently, these flags are recursively propagated. This behavior is
not expected by users, because it can make the number of dependencies
explode.
Signed-off-by: Francois Perrad <francois.perrad@gadz.org>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Currently, without the -recommend flag, scancpan only takes as
dependencies the ones with the "requires" relationship; this mode works
fine. With the -recommend flag, scancpan takes all of them (i.e. with
the "requires" or "recommends" relationship) in the same way; this mode
never works well, because it is too simplistic.
With this commit, the "not required" dependencies are handled as
optional BR packages, or skipped if a cyclic dependency is detected.
Signed-off-by: Francois Perrad <francois.perrad@gadz.org>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The way we use it, gzip will store the current time in the header, which
leads to unreproducible archives.
Fix that by telling gzip not to store the name and date of the file it
compresses, with the -n option. Since it compresses its stdin, there was
already no filename stored; now there is no date stored either.
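For illustration:

    # Without -n, gzip records the current time in the header; with -n the same
    # input always produces byte-identical output.
    tar cf - -T files.list | gzip -n >foo.tar.gz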
Note: gzip has had -n since at least 1.2.4, released in 1993, so
virtually every gzip out there nowadays has it.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Just like the --stop-on and --exclude options allow stopping on, or
excluding, virtual packages by passing the "virtual" magic value, this
commit extends the graph-depends logic to support a "host" magic value
for --stop-on and --exclude. This allows drawing the graph stopping at
host packages, or excluding host packages altogether.
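A possible invocation (output redirection shown for illustration):

    ./support/scripts/graph-depends --exclude host >graph.dot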
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Tested-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
[Thomas: minor code beautification suggested by Yann E. Morin.]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The condition to determine whether a virtual package should be excluded
from the list (because "virtual" was passed in --exclude) is inside a
loop iterating over each entry of the exclude_list, but it doesn't use
the iterator of this list.
Indeed, the condition contains:
"virtual" in exclude_list
which already checks whether "virtual" was passed in the list. Because
of this, there is no need for this check to be within the "for p in
exclude_list" iteration. This commit fixes that by moving the check
outside of the loop.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Tested-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Add an option to graph-depends to only do the dependency checks and not
generate the dot program.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Currently, if there is a circular dependency in the packages, the
graph-depends script just errors out with a Python RuntimeError which is
not caught, resulting in a very long backtrace which does not provide
any hint as to what the real issue is (even if "RuntimeError: maximum
recursion depth exceeded" is a pretty good hint at it).
We fix that by recursing down the dependency chain of each package,
until we either end up with a package with no dependencies, or with a
package already seen along the current dependency chain.
We need to introduce a new function, check_circular_deps(), because we
can't re-use the existing ones:
- remove_mandatory_deps() does not iterate,
- remove_transitive_deps() does iterate, but we do not call it for the
top-level package if it is not 'all'
- it does not make sense to use those functions anyway, as they were
not designed to _check_ but to _act_ on the dependency chain.
Since we've had time-related issues in the past, we do not want to
introduce yet another time-hog, so here are timings with the circular
dependency check:
$ time python -m cProfile -s cumtime support/scripts/graph-depends
[...]
28352654 function calls (20323050 primitive calls) in 87.292 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.012 0.012 87.292 87.292 graph-depends:24(<module>)
21 0.000 0.000 73.685 3.509 subprocess.py:473(_eintr_retry_call)
7 0.000 0.000 73.655 10.522 subprocess.py:768(communicate)
7 73.653 10.522 73.653 10.522 {method 'read' of 'file' objects}
5/1 0.027 0.005 43.488 43.488 graph-depends:164(get_all_depends)
5 0.003 0.001 43.458 8.692 graph-depends:135(get_depends)
1 0.001 0.001 25.712 25.712 graph-depends:98(get_version)
1 0.001 0.001 13.457 13.457 graph-depends:337(remove_extra_deps)
1717 1.672 0.001 13.050 0.008 graph-depends:290(remove_transitive_deps)
9784086/2672326 5.079 0.000 11.363 0.000 graph-depends:274(is_dep)
2883343/1980154 2.650 0.000 6.942 0.000 graph-depends:262(is_dep_uncached)
1 0.000 0.000 4.529 4.529 graph-depends:121(get_targets)
2883343 1.123 0.000 1.851 0.000 graph-depends:246(is_dep_cache_insert)
9784086 1.783 0.000 1.783 0.000 graph-depends:255(is_dep_cache_lookup)
2881580 0.728 0.000 0.728 0.000 {method 'update' of 'dict' objects}
1 0.001 0.001 0.405 0.405 graph-depends:311(check_circular_deps)
12264/1717 0.290 0.000 0.404 0.000 graph-depends:312(recurse)
[...]
real 1m27.371s
user 1m15.075s
sys 0m12.673s
The cumulative time spent in check_circular_deps is just below 0.5s,
which is largely less than 1% of the total run time.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Currently, graph-depends outputs the dot program to stdout, and uses
stderr to trace the dependencies it is currently looking for.
This redirection was done because the output was directly piped into the
dot program to generate the final PDF/SVG/... dependency graph, but that
meant that an error in the graph-depends script was never caught
(because shell pipes only return the last command's exit status, and an
empty dot program is perfectly valid, so dot would not complain).
Add an option to tell graph-depends where to store the generated dot
program, and keep stdout as the default if not specified.
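For example (the option name is illustrative; the old pipe-based usage is
shown for contrast):

    # Before: an error in graph-depends is swallowed by the pipe
    ./support/scripts/graph-depends | dot -Tpdf -o graph.pdf
    # After: failures are detected before dot ever runs
    ./support/scripts/graph-depends -o graph.dot && dot -Tpdf -o graph.pdf graph.dot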
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Samuel Martin <s.martin49@gmail.com>
[Thomas: rename metavar from DOT_FILE to OUT_FILE for consistency with
the rest of the new option naming.]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Leverage the CSV files produced by size-stats (make graph-size) to allow
comparing the rootfs size of two different Buildroot compilations.
The script takes the file-size CSV files of two compilations as input and
produces a textual report of the differences per package.
Using the -d/--detail flag, the report shows the file size changes
instead of the package size changes.
The -t/--threshold option allows ignoring file size differences smaller
than or equal to the given threshold (in bytes).
Example output is:
Size difference per package (bytes), threshold = 0
--------------------------------------------------------------------------------
-8192 busybox
228572 added dmalloc
301584 added jq
--------------------------------------------------------------------------------
521964 TOTAL
or with detailed view:
Size difference per file (bytes), threshold = 0
--------------------------------------------------------------------------------
-8192 bin/busybox
18152 added usr/bin/jq
39252 added usr/bin/dmalloc
46968 added usr/lib/libdmalloc.so
47288 added usr/lib/libdmallocxx.so
47316 added usr/lib/libdmallocth.so
47748 added usr/lib/libdmallocthcxx.so
283432 added usr/lib/libjq.so.1.0.4
--------------------------------------------------------------------------------
521964 TOTAL
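A hedged invocation that would produce a report like the above (the
script name and CSV file paths are illustrative):

    ./support/scripts/size-stats-compare \
        before/file-size-stats.csv after/file-size-stats.csv -d -t 0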
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
This patch adds a Vagrantfile to Buildroot. With this file
you can provision a complete Buildroot development environment
in minutes on all major platforms (Linux/Mac/Windows).
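Typical usage, from a checkout containing the Vagrantfile:

    vagrant up     # create and provision the VM
    vagrant ssh    # log into the Buildroot development environment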
[Peter: bump to 2GB RAM, hardcode Buildroot release, add unzip,
drop website update and tweak manual text as suggested by Yann]
Signed-off-by: Angelo Compagnucci <angelo.compagnucci@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
In some cases, the toolchain sources are now recovered and available in
the legal-info output. So, adapt the header to use a conditional
statement instead of a definitive negation.
Also update the part about saving the sources: it's not the license list
that defines whether sources are installed, but rather whether the
package is redistributable or not.
Update the header accordingly.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Luca Ceresoli <luca@lucaceresoli.net>
Acked-by: Luca Ceresoli <luca@lucaceresoli.net>
Acked-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
It hasn't been updated since it was added in 2008, and nowadays this
kind of stuff should be handled with genimage.
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Creating temporary files in /tmp (or the path pointed to by $TMPDIR)
allows the Buildroot top directory to be read-only and shareable between
multiple builds.
This follows what other scripts do, e.g. check-kernel-headers.sh.
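The pattern looks roughly like this (template name is illustrative):

    tmpfile="$(mktemp "${TMPDIR:-/tmp}/br2-script.XXXXXX")"
    trap 'rm -f "${tmpfile}"' EXIT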
Signed-off-by: Henrique Marks <henrique.marks@datacom.ind.br>
Signed-off-by: Carlos Santos <casantos@datacom.ind.br>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Allow users to specify access methods other than :pserver:anonymous@
for CVS repositories. The access method shall be defined in the
<pkg>_SITE variable.
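A sketch of the resulting logic in the cvs download helper (variable
name illustrative):

    # Keep a user-supplied access method, otherwise default to anonymous pserver.
    if [[ ! "${uri}" =~ ^: ]]; then
        uri=":pserver:anonymous@${uri}"
    fi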
[Thomas:
- as suggested by Yann, quote the variable expansion
- as suggested by Yann, use a regexp match
- tweak commit log]
Signed-off-by: Joao Mano <joao@datacom.ind.br>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
In efe7f68 (support/download: generate reproducible Bazaar archives),
bzr was instructed to store files with the timestamp set to the date
they were last modified in the repository, instead of the current date,
using the --per-file-timestamp option.
However, this option has been added only in bzr-2.2 (August 2010) which
is not available on older distros.
We fix that by not using --per-file-timestamp when the bzr version is
older than 2.2, in which case we just generate the archive with the
current date set on files.
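A sketch of the version gate (exact parsing in the real helper may
differ, and the option spelling should be checked against bzr's help):

    bzr_version="$(bzr --version | sed -n 's/^Bazaar (bzr) \([0-9.]*\).*/\1/p')"
    timestamp_opt="--per-file-timestamp"   # spelled as in the text above
    if [ "$(printf '%s\n2.2\n' "${bzr_version}" | sort -V | head -n 1)" != "2.2" ]; then
        timestamp_opt=""                   # bzr older than 2.2: option not available
    fi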
This means the archive is thus non-reproducible, and if a hash is
available for that archive, the hash will not match, and Buildroot will
try to download from the mirror (if any) or fail (if no mirror).
Fixes:
http://autobuild.buildroot.org/results/51f/51f4ff5462c15a85937d411f457096224d00fdcd
http://autobuild.buildroot.org/results/b88/b8828b5fbc16128408c2f44169ac23de7e34d770
http://autobuild.buildroot.org/results/fb4/fb4b0fb2131b40c18273dbe5e51b393cb6df18ec
...
[Peter: simplify sed invocation]
Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The apply-patches.sh script was using a mix of tabs and spaces, and
some three-space indentation. Normalize everything to four-space
indentation.
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
A copy/paste error in the ArgumentParser() constructor call disclosed
the fact that the author of the script has shamefully based his work
on the existing graph-build-time script. This commit fixes this
mistake, therefore hiding in a better way how size-stats was
vampirized from graph-build-time.
Reported-by: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Similarly to what has previously been done for the Hg download backend,
instruct bzr to generate the archive on stdout, so that we can generate
reproducible archives.
When instructing bzr to generate the output file by itself, it uses a
temporary file that is then fed to gzip, which in turn stores the
timestamp of that file in the generated archive, whereas when the output
is generated on stdout, there is no timestamp, so the archive is then
reproducible.
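A hedged sketch of the resulting invocation (flags from memory, verify
against 'bzr export --help'; '-' as the destination means stdout):

    bzr export --root="foo-1.0/" --format=tgz -r "${cset}" - "${repo_url}" >foo-1.0.tar.gz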
Bizarrely enough, we can tell 'bazaar' not to generate a bazaar in the
archive. Cool, huh? ;-]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
When hg directly creates the output file, the hash of that file changes
every time.
However, if we just tell hg to output the archive on stdout and we do
the redirect to the file, then the archive is reproducible.
(The reason is that, in the first case, a temporary file is created and
then compressed, and gzip adds the filename and its timestamp to the
gzip header, while in the second case there is no temporary file, and
thus no timestamp, so the archive is reproducible.)
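For illustration (repository path, prefix and cset are illustrative):

    # Writing to stdout and redirecting ourselves avoids the temporary file whose
    # name and timestamp gzip would otherwise record in the header.
    hg archive --repository "${repo_dir}" --type tgz --prefix "foo-1.0" \
        --rev "${cset}" - >foo-1.0.tar.gz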
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Yegor Yefremov <yegorslists@googlemail.com>
Tested-by: Yegor Yefremov <yegorslists@googlemail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
For large configurations, the execution time of
remove_transitive_deps() becomes really high, due to the number of
nested loops + the is_dep() function being recursive.
For an allyespackageconfig, the remove_extra_deps() function takes 334
seconds to execute, and the overall time to generate the .dot file is
6 minutes and 39 seconds. Here is a timing of the different
graph-depends steps and the overall execution time:
Getting dependencies: 42.5735 seconds
Turn deps into a dict: 0.0023 seconds
Remove extra deps: 334.1542 seconds
Get version: 22.4919 seconds
Generate .dot: 0.0197 seconds
real 6m39.289s
user 6m16.644s
sys 0m8.792s
By adding a very simple cache for the results of is_dep(), we bring
down the execution time of the "Remove extra deps" step from 334
seconds to just 4 seconds, reducing the overall execution time to
1 minute and 10 seconds:
Getting dependencies: 42.9546 seconds
Turn deps into a dict: 0.0025 seconds
Remove extra deps: 4.9643 seconds
Get version: 22.1865 seconds
Generate .dot: 0.0207 seconds
real 1m10.201s
user 0m47.716s
sys 0m7.948s
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Tested-by: Gustavo Zacarias <gustavo.zacarias@free-electrons.com>
[yann.morin.1998@free.fr:
- rename is_dep() to is_dep_uncached(), keep existing code as-is
- add is_dep() as a cached version of is_dep_uncached()
- use constructs more in line with 2to3
- use exceptions (EAFP) rather than check-before-use (LBYL) to be more
pythonic; that even decreases the duration yet a little bit more!
]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Some users may provide custom download commands with spaces in their
arguments, like so:
BR2_HG="hg --config foo.bar='some space-separated value'"
However, the way we currently call those commands does not account
for the extra quotes, and each space-separated part of the command is
interpreted as separate arguments.
Fix that by calling 'eval' on the commands.
Because of the eval, we must further quote our own arguments, to avoid
the eval further splitting them in case there are spaces (even though
we do not support paths with spaces, better be clean from the outset to
avoid breakage in the future).
We change all the wrappers to use a wrapper-function, even those with
a single call, so they all look alike.
Note that we do not single-quote some of the variables, like ${verbose},
because they can be empty and we really do not want to generate an
empty-string argument. That's not a problem, as ${verbose} would not
normally contain space-separated values (it could get set to something
like '-q -v', but in that case we'd still want two arguments, so that's
fine).
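A sketch of the wrapper pattern (repository and destination names are
illustrative):

    _hg() {
        eval ${HG} "${@}"
    }
    # Our own arguments are single-quoted so eval keeps them whole; ${verbose} is
    # left unquoted on purpose, so an empty value does not become an empty argument.
    _hg clone ${verbose} "'${repo_url}'" "'${dest_dir}'"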
Reported-by: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Reviewed-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
A series file for quilt has a valid syntax of:
fixes/autoconf.diff -p1
fixes/doc-html-local-css.diff -p1
fixes/gnu-inline.diff -p1
However, with the current way a series file is handled, it will
error out because the -p1 is tried as a file. This is because, in the
for loop that iterates over the files, we only look for comment lines.
Then each line is used within a bash for loop, which uses spaces as
delimiters. To fix this, we should only use the string that comes
before a space in the series file.
Note that the format allows for any arbitrary depth to the -pN field.
But since we'll have only one package with -pN fields, and all will be
-p1, we for now always assume -p1. This will have to be fixed whenever
we get a package with other values.
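A sketch of that handling (apply_patch stands in for the actual patch
application, and the variable names are illustrative):

    while read -r patch level; do
        case "${patch}" in ""|"#"*) continue ;; esac   # skip blank and comment lines
        apply_patch "${dir}/${patch}"                  # 'level' (e.g. -p1) is ignored: we assume -p1
    done <"${series_file}"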
Signed-off-by: Ryan Barnett <ryanbarnett3@gmail.com>
[yann.morin.1998@free.fr: expand comment about the format of a series
file and how we interpret it]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
CC: Arnout Vandecappelle <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Despite the comment saying so, the trailing '/' in the host directory is
not removed. Note however that it is properly removed from extracted
RPATH tags.
This is not visible when the host directory is our default $(O)/host
location, but it breaks for a user-supplied external host directory when
the user leaves a trailing slash in the path.
Fix that.
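The fix essentially boils down to stripping the trailing slash from the
user-supplied value (variable name illustrative):

    host_dir="${host_dir%/}"   # drop a single trailing '/', if any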
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
When specifying BR2_LINUX_KERNEL_CUSTOM_REPO_VERSION, a user may want to
specify the SHA of a reference other than a branch or tag.
For instance, Gerrit stores the patchsets under refs/changes/xx/xxx, and
Github stores the pull requests under refs/pull/xxx/head.
When cloning a repository with --bare, you don't fetch these references.
This patch uses --mirror for a full clone, in order to give the user
access to all references of the Git repository.
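For reference (repository URL and clone name are illustrative):

    # --mirror implies --bare but maps *all* remote refs (refs/changes/*,
    # refs/pull/*, ...) into the local clone, unlike a plain --bare clone.
    git clone --mirror "${repo_url}" "${basename}.git"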
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: "Maxime Hadjinlian" <maxime.hadjinlian@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
When we build our host programs, and they depend on a host library we
also build, we want to ensure that program actually uses that library at
runtime, and not the one from the system.
We currently ensure that in two ways:
- we add an RPATH tag that points to our host library directory,
- we export LD_LIBRARY_PATH to point to that same directory.
With these two in place, we're pretty much confident that our host
libraries will be used by our host programs.
However, it turns out that not all the host programs we build end up
with an RPATH tag:
- some packages do not use our $(HOST_LDFLAGS)
- some packages' build systems are oblivious to those LDFLAGS
In this case, there are two situations:
- the program is not linked to one of our host libraries: it in fact
does not need an RPATH tag [0]
- the program actually uses one of our host libraries: in that case it
should have had an RPATH tag pointing to the host directory.
Libraries, for their part, only need an RPATH if they depend on another library
that is not installed in the standard library path. However, any system
library will already be in the standard library path, and any library we
install ourselves is in $(HOST_DIR)/usr/lib so already in RPATH.
We add a new support script that checks that all ELF executables have
a proper DT_RPATH (or DT_RUNPATH) tag when they link against our host
libraries, and reports the files that are missing an RPATH. If a file
missing an RPATH is an executable, the script aborts; if only libraries
are missing an RPATH, the script does not abort.
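A sketch of the per-file test (the real script scans the whole host
directory and is more careful; variable names are illustrative):

    needed="$(readelf --dynamic "${file}" | sed -n 's/.*(NEEDED).*\[\(.*\)\]/\1/p')"
    rpath="$(readelf --dynamic "${file}" | grep -E '\((RPATH|RUNPATH)\)')"
    for lib in ${needed}; do
        if [ -e "${HOST_DIR}/usr/lib/${lib}" ] && [ -z "${rpath}" ]; then
            echo "ERROR: ${file} uses host library ${lib} but has no RPATH" >&2
        fi
    done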
[0] Except if it were to dlopen() it, of course, but the only program
I'm aware of that does that is openssl, and it has a correct RPATH tag.
[Peter: reworded as suggested by Arnout, fix HOT_DIR typo in comment]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
When a series file exists, we should use every file mentioned in it,
not just the ones ending with .patch or .diff. Also, there's no need
to uncompress anything if it's mentioned in a series file (the tools
that manipulate series files don't support compressed patches).
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Cc: Doug Kehn <rdkehn@yahoo.com>
Tested-by: Doug Kehn <rdkehn@yahoo.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The 'quiet' variable is set and exported, but it is not used, so we can
safely remove it.
This variable is inherited from the Makefile of the Linux kernel, and
is not used in Buildroot.
In support/scripts/mkmakefile, the 'quiet' value is checked, but the test
is always true ('quiet' is never set to silent_), so the test can be
removed as well.
Signed-off-by: Cédric Marie <cedric.marie@openmailbox.org>
Reviewed-by: "James Knight" <james.d.knight@live.com>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Add this heuristic when no specific license file is found.
Signed-off-by: Francois Perrad <francois.perrad@gadz.org>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>