In some cases, upstreams just update their releases in-place, without
renaming them. When such a package is updated in Buildroot, a new hash
matching the new upstream release is included in the corresponding .hash
file.
As a consequence, users who previously downloaded that package's tarball
with an older version of Buildroot are stuck with an old archive for that
package; after updating their Buildroot copy, they are greeted with a
failed download, because the local file does not match the new hashes.
Also, an upstream will sometimes serve us HTML garbage instead of the
actual tarball we requested, like SourceForge does from time to time, for
as-yet unknown reasons.
So, to avoid this situation, check the hashes before doing the download.
If the hashes match, consider the locally cached file genuine and do not
re-download it. However, if the locally cached file does not match the
known hashes we have for it, promptly remove it and re-attempt the
download.
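In the download wrapper, this pre-download check boils down to something
like the following (a sketch only; variable names are illustrative, and
check-hash is the existing helper that verifies a file against its .hash
file):

    if [ -e "${output}" ]; then
        if support/download/check-hash "${hash_file}" "${output}" "${base_name}"; then
            exit 0      # cached file matches its hashes: no download needed
        fi
        # stale or corrupted cached file: remove it and fall through
        # to a fresh download
        rm -f "${output}"
    fi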
Note: this does not add any overhead compared to the previous situation,
because we were already checking hashes of locally cached files. It just
changes the order in which we do the checks. For the record, here is the
overhead of hashing a 231MiB file (qt-everywhere-opensource-src-4.8.6.tar.gz)
on a Core i5 @2.5GHz:
           cache-cold  cache-hot
    sha1       1.914s     0.762s
    sha256     2.109s     1.270s
But again, this overhead already existed before this patch.
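For reference, such timings can be obtained with something like:

    time sha1sum   qt-everywhere-opensource-src-4.8.6.tar.gz
    time sha256sum qt-everywhere-opensource-src-4.8.6.tar.gz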
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Instead of repeating the check in our download rules, delegate the check
of the hashes to the download wrapper.
This needs three different changes:
- add a new argument to the download wrapper: the full path to the
  hash file; a missing hash file does not change the current behaviour,
  since its existence is checked for in the check-hash script;
- add a third argument to the check-hash script: the basename of the
  file to check; this is required because we no longer check the final
  file under its final filename, but an intermediate file under a
  temporary filename;
- do the actual call to the check-hash script from within the download
  wrapper, as sketched below.
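That call amounts to something like this (a sketch; variable names are
illustrative, and check-hash itself copes with a missing hash file):

    if ! support/download/check-hash "${hash_file}" "${tmp_dl}" "${base_name}"; then
        rm -f "${tmp_dl}"
        exit 1      # hashes do not match: discard the download and fail
    fi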
This further paves the way to doing pre-download checks of the hashes
for the locally cached files.
Note: this patch removes the check for hashes for already downloaded
files, since the wrapper script exits early. The behaviour to check
locally cached files will be restored and enhanced in the following
patch.
[Thomas: fix minor typo in comment.]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Instead of repeating the same test again and again in all our download
rules, just delegate the check for an already downloaded file to the
download wrapper.
This clears the path for checking the hashes of a locally cached file
before the download.
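In the wrapper, this is just an early exit along these lines (an
illustrative sketch; variable names are not verbatim):

    if [ -e "${output}" ]; then
        exit 0      # already downloaded: nothing to do
    fi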
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Instead of relying on argument ordering, use actual options in the
download wrapper.
Download backends (bzr, cp, hg...) are left as-is, because it does not
make sense to complicate them: they are very trivial shell scripts, and
adding option parsing to them would be overkill.
This commit also renames the script to dl-wrapper so it looks better in
the traces, and it is not confused with another wrapper.
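The parsing is done with getopts, along these lines (the exact option
letters, and the help and error helpers, are illustrative here, not
necessarily the ones used by dl-wrapper):

    while getopts :ho:H: OPT; do
        case "${OPT}" in
            h)  help; exit 0;;
            o)  output="${OPTARG}";;
            H)  hash_file="${OPTARG}";;
            :)  error "option '-%s' expects an argument\n" "${OPTARG}";;
            \?) error "unknown option '-%s'\n" "${OPTARG}";;
        esac
    done
    shift $((OPTIND-1))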
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The arguments are used correctly, but documented incorrectly.
Swap the comments to match the actual usage.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Not all systems have /bin/bash (e.g. NixOS[1] doesn't). Buildroot
already uses /usr/bin/env shebangs for other interpreters (perl,
python), so why not bash?
This changes only the shebangs used by Buildroot itself; stuff installed
to the target system is left unchanged.
With this applied I can run Buildroot unmodified on NixOS.
[1]: http://nixos.org/
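That is, the shebang lines change from:

    #!/bin/bash

to:

    #!/usr/bin/env bash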
Signed-off-by: Bjørn Forsman <bjorn.forsman@gmail.com>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The git helper uses gzip to compress the intermediate tarball. But gzip
removes the source file and creates a new file, named by appending .gz
to the original file name.
Thus, we end up with output.gz, while the download wrapper expects just
output, and therefore believes the download failed.
Fix that by storing the tar from git into a temporary file, then piping
that file to gzip's stdin and redirecting gzip's stdout to the output
file.
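In other words, the helper now does something like this (a sketch;
variable names are illustrative):

    # let git write an uncompressed tar into a temporary file...
    git archive --prefix="${basename}/" --format=tar \
        -o "${output}.tar" "${cset}"
    # ...then compress it via stdin/stdout, so the compressed data ends
    # up in exactly the file name the download wrapper expects
    gzip -c < "${output}.tar" > "${output}"
    rm -f "${output}.tar"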
Reported-by: Graham Newton <gnewton@peavey-eu.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Peter Seiderer <ps.report@gmx.net>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
This drastically simplifies the wget helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by running 'make busybox-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
This drastically simplifies the svn helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by running 'make open2300-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
This drastically simplifies the scp helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by setting a primary site to 'scp://localhost:/tmp' and
running 'make vim-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
This drastically simplifies the hg helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by running 'make vim-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
This drastically simplifies the git helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by running 'make fmc-fsl-sdk-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
This drastically simplifies the cvs helper, as it no longer has to deal
with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
This drastically simplifies the localfiles helper, as it no longer has
to deal with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
(Tested by setting BUSYBOX_SITE = file:///tmp and running 'make busybox-source')
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
This drastically simplifies the bzr helper, as it no longer has to
deal with atomically saving the downloaded archive.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
The download wrapper is responsible for ensuring the atomicity
of saving into $(BR2_DL_DIR).
It calls the appropriate download helper, telling it to save the
downloaded content to a temporary file in $(BUILD_DIR) (so it does not
clutter $(BR2_DL_DIR) with partial, failed downloads).
Then, only if the download helper was successful, the wrapper saves the
downloaded content to the final location, still under a temporary file
name, and finally atomically renames it to the final output file.
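The resulting flow is roughly the following (the variable names and the
backend invocation are illustrative, not the exact script):

    # have the backend save into a scratch file outside BR2_DL_DIR
    tmp_dl="$(mktemp "${BUILD_DIR}/.XXXXXX")"
    if "${backend}" "${tmp_dl}" "${url}"; then
        # copy next to the final file, then rename atomically
        tmp_output="$(mktemp "${output}.XXXXXX")"
        cp "${tmp_dl}" "${tmp_output}"
        mv "${tmp_output}" "${output}"
    fi
    rm -f "${tmp_dl}"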
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
When switching the git helper over to a shell script, a special case was
not carried over: in case the remote has the required reference, we
attempt a shallow clone, using --depth 1. However, this is not supported
when the remote is accessed with the http protocol.
Therefore, the download fails.
What happened before the conversion to a shell script was that the helper
in the Makefile would fall back to doing a full clone. This is the case
and behaviour that were lost in the conversion.
To avoid making the script too complex, we only attempt a full clone when
needed. A full clone is deemed needed by default; it is deemed unnecessary
only if the remote has the needed reference *and* the shallow clone was
successful.
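The resulting logic is roughly as follows (a sketch, not the exact
script):

    git_done=0
    if [ -n "$(git ls-remote "${repo}" "${cset}")" ]; then
        # the remote has the reference: try a cheap, shallow clone first
        if git clone --depth 1 -b "${cset}" "${repo}" "${basename}"; then
            git_done=1
        fi
    fi
    if [ ${git_done} -eq 0 ]; then
        # the reference is not advertised, or the shallow clone failed
        # (e.g. over http): fall back to a full clone
        git clone "${repo}" "${basename}"
    fi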
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
bzr uses the extension of the output file name to know what output
format to use: tar, tgz, tar.bz2... If no extension is recognised, bzr
will output to a directory.
Since we use 'mktemp .XXXXXX' to generate temporary files, the name
obviously never ends with a recognised extension. Thus, bzr expects the
output to be a directory, and fails since it is a file.
Fix that by forcing the output format.
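Concretely, this means passing the format explicitly, along these lines
(variable names are illustrative):

    bzr export -r "${rev}" --format=tgz "${output}" "${repo}"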
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Although md5 is, for legacy reasons, a supported hash type,
it is not documented on purpose, since it is now known to
be weak.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Baruch Siach <baruch@tkos.co.il>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Samuel Martin <s.martin49@gmail.com>
Cc: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Reviewed-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Some of the packages that Buildroot might build are sensitive packages,
related to security: openssl, dropbear, ca-certificates...
Some of those packages are downloaded over plain http, because there is
no way to get them over a secure channel, such as https.
In these dark times of pervasive surveillance, given the potential for
harm that a tampered-with package could cause, we may want to check the
integrity of those sensitive packages.
So, each package may now provide a list of hashes for all the files that
need to be downloaded, and Buildroot will just fail if any downloaded
file does not match its known hash, in which case the offending file is
removed.
Hashes can be any of the md5, sha1 or sha2 variants, and will be checked
even if the file was pre-downloaded.
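A package's .hash file then contains one entry per file to check, along
the lines of the following (the hash values are placeholders here, not
real checksums):

    # Locally computed:
    sha256  <sha256-of-foo-1.0.tar.gz>  foo-1.0.tar.gz
    sha1    <sha1-of-foo-1.0.tar.gz>    foo-1.0.tar.gz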
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Baruch Siach <baruch@tkos.co.il>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Cc: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
DL_DIR can be a very precious place for some users: they use it to store
all the downloaded archives to share across all their Buildroot (and
maybe non-Buildroot) builds.
We do not want to trash this location with our temporary downloads (e.g.
git, Hg, svn, cvs repository clones/checkouts, or wget and bzr temporary
tarballs).
It turns out that we already have some kind of scratchpad: BUILD_DIR.
Although it is not really a disposable location, it is the best we have
so far.
Also, we create the temporary tarballs with mktemp, using the final
tarball name as the template, since we want the temporary file to be on
the same filesystem as the final location; that way the 'mv' is just a
plain, atomic rename(2), and we are never left with a half-copied file
at the final location.
Using mktemp ensures all temporary file names are unique, which allows
parallel downloads from different build dirs at the same time, without
them clobbering each other's downloads.
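That is, something like:

    tmp_output="$(mktemp "${output}.XXXXXX")"

which yields a unique name in the same directory, hence on the same
filesystem, as the final file.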
Note: we're using neither ${TMP} nor ${TMPDIR}, since they are shared
locations, sometimes with little space (e.g. tmpfs), and some of the
repositories we clone/checkout can be very big.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Samuel Martin <s.martin49@gmail.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Thomas De Schampheleire <patrickdepinguin@gmail.com>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
[tested a particular scenario that used to fail: two separate builds
using a shared DL_DIR, ccache enabled, so that they run almost
synchronously. These would download the same file at the same time,
corrupting each other. With the patches in this series, all works
fine.]
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Maintaining the download helpers in the Makefile has proved to be a bit
complex, so move this helper to a shell script.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Maintaining the download helpers in the Makefile has proved to be a bit
complex, so move this helper to a shell script.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Maintaining the download helpers in the Makefile has proved to be a bit
complex, so move this helper to a shell script.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Maintaining the download helpers in the Makefile has proved to be a bit
complex, so move this helper to a shell script.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Maintaining the download helpers in the Makefile has proved to be a bit
complex, so move this helper to a shell script.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Tested-by: Thomas De Schampheleire <thomas.de.schampheleire@gmail.com>
[tested a particular scenario that used to fail, when the 'hg archive'
step is interrupted, now working fine]
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Maintaining the download helpers in the Makefile has proved to be a bit
complex, so move this helper to a shell script.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Maintaining the download helpers in the Makefile has proved to be a bit
complex, so move this helper to a shell script.
[Peter: redirect pushd/popd output to /dev/null]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Reviewed-by: Ryan Barnett <ryan.barnett@rockwellcollins.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The git download helper is getting a bit more complex. Fixing it in the
Makefile when it breaks (like the recent breakage with a non-existing
sha1-cset) proves to be challenging, to say the least.
Move it into a shell script in support/download/git, which will make
it much easier to read, maintain, fix and enhance in the future.
[Peter: redirect pushd/popd output to /dev/null]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Luca Ceresoli <luca@lucaceresoli.net>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>