Fixes:
http://autobuild.buildroot.net/results/12a/12a63ae177fe3ed0c9a1ef2fa01870f334f36b0f/
Currently, when the post-process helper fails while downloading from
upstream, there is no fallback to the backup mirror.
If the post-process helper fails, we must consider that a download
failure and bail out as if the download backend itself had failed;
currently, we do not.
Duplicate the logic we already have for the download helper: if the
post-process helper fails, remove the downloaded files and continue on
to the next URI, which will ultimately hit the backup mirror (if one
has been configured).
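Illustratively, the resulting logic in the wrapper's URI loop looks
like this (a minimal sketch; variable names are illustrative, not the
exact dl-wrapper code):
    # treat a post-process failure exactly like a backend failure:
    if ! support/download/"${post_process}"-post-process "${tmp_output}"; then
        rm -rf "${tmp_output}"  # remove the downloaded files
        continue                # try the next URI (the backup mirror last)
    fi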
Reported-by: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
In order to support package managers such as Cargo (Rust) or Go, we
want to run some custom logic after the main download, but before
packing the tarball and checking the hash.
To implement this, this commit introduces the concept of download
post-processing: if '-p <something>' is passed to the dl-wrapper, then
support/download/<something>-post-process will be called.
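For illustration, the dispatch boils down to something like this (how
the downloaded file is passed to the script is an assumption here):
    # dl-wrapper ... -p cargo ...  runs  support/download/cargo-post-process
    if [ -n "${post_process}" ]; then
        support/download/"${post_process}"-post-process "${tmp_output}"
    fi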
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
[yann.morin.1998@free.fr:
- double-quote variable expansion when calling post-process script
]
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Git Large File Storage replaces large files with text pointers in the
Git repository while storing the contents on a remote server. If a
repository is using this extension, then git-lfs must be used to
check out the large files before the source archive is generated.
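For illustration, the kind of sequence the git helper needs when LFS
is in use (the detection test is an assumption, not the helper's
verbatim code):
    # materialise the real contents before the source archive is generated
    if [ -f .gitattributes ] && grep -q 'filter=lfs' .gitattributes; then
        git lfs fetch      # download the large files from the LFS server
        git lfs checkout   # replace the text pointers in the work tree
    fi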
Signed-off-by: John Keeping <john@metanate.com>
[vfazio:
- add git-lfs to DL_TOOLS_DEPENDENCIES
- fixup for 5a0d681394
("infra/pkg-download: make the DOWNLOAD macro fully parameterised")
]
Signed-off-by: Vincent Fazio <vfazio@xes-inc.com>
[Arnout:
- don't "git lfs install";
- recurse into submodules.
]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Add Secure File Transfer Program (SFTP) support using a simple wrapper.
SFTP is a common protocol used to transfer files securely between
enterprises, but it is not currently supported in Buildroot because all
of the packages are usually available via HTTP, git or some other
download method.
SFTP is similar to FTP, but it performs all operations over an encrypted
SSH transport using a specific protocol. This is unlike ftps, which is
traditional FTP over an SSL/TLS connection.
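A minimal sketch of such a wrapper, reusing the generic -q/-o/-u
options of the other helpers (the exact sftp invocation is an
assumption):
    #!/usr/bin/env bash
    # A sketch, not the actual helper: fetch a single file over SFTP.
    # NB: converting "sftp://user@host/path" into the "user@host:/path"
    # form that sftp(1) expects is omitted here.
    while getopts ":qo:u:" OPT; do
        case "${OPT}" in
        q)  quiet=-q;;
        o)  output="${OPTARG}";;
        u)  uri="${OPTARG}";;
        esac
    done
    # print the command being run, as the wget and scp helpers do
    printf '%s ' sftp ${quiet} "${uri}" "${output}"; printf '\n'
    exec sftp ${quiet} "${uri}" "${output}"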
Signed-off-by: Thomas Preston <thomas.preston@codethink.co.uk>
Signed-off-by: Michael Drake <michael.drake@codethink.co.uk>
[Arnout:
- update documentation with sftp everywhere scp is mentioned;
- rename "verbose" variable to "quiet";
- print the sftp command, similar to wget and scp helpers.
]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Since commit 38de434123 ("download: fix file:// BR2_PRIMARY_SITE
(download cache)"), the urlencode option is no longer passed to the
download backend, because we use ${backend} instead of
${backend_urlencode}.
We must get the urlencode information from backend_urlencode.
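Illustratively, the fix recovers the flag from the full backend
specification (the option name passed on to the backend is an
assumption):
    # ${backend_urlencode} is e.g. "wget|urlencode", ${backend} is "wget"
    case "${backend_urlencode}" in
    *\|urlencode)  urlencode="-e";;   # assumed backend option name
    *)             urlencode="";;
    esac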
Signed-off-by: Damien Thébault <damien.thebault@vitec.com>
Reviewed-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
[Thomas: rework commit log]
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
wget is the only downloader currently usable with BR2_PRIMARY_SITE, and that
doesn't work at all for file:// URLs. The symptoms are these:
support/download/dl-wrapper -c '2.4.47' -d '/PATH/build/sw/source/attr' -D '/PATH/build/sw/source' -f 'attr-2.4.47.src.tar.gz' -H 'package/attr//attr.hash' -n 'attr-2.4.47' -N 'attr' -o '/PATH/build/sw/source/attr/attr-2.4.47.src.tar.gz' -u file\|urlencode+file:///NFS/buildroot_dl_cache/attr -u file\|urlencode+file:///NFS/buildroot_dl_cache -u http+http://download.savannah.gnu.org/releases/attr -u http\|urlencode+http://sources.buildroot.net/attr -u http\|urlencode+http://sources.buildroot.net --
file:///NFS/buildroot_dl_cache/attr/attr-2.4.47.src.tar.gz: Unsupported scheme `file'.
ERROR: attr-2.4.47.src.tar.gz has wrong sha256 hash:
ERROR: expected: 25772f653ac5b2e3ceeb89df50e4688891e21f723c460636548971652af0a859
ERROR: got : e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
ERROR: Incomplete download, or man-in-the-middle (MITM) attack
In the case of custom Linux kernel versions, this is fatal, because there isn't
necessarily a hash file to indicate that wget's empty tarball is wrong.
This seems to have been broken by commit c8ef0c03b0, because:
1. BR2_PRIMARY_SITE always appends "urlencode" (package/pkg-download.mk)
2. Anything with the "|urlencode" suffix in $uri will end up using wget due to
the backend case wildcarding.
3. The wget backend rejects file:/// URLs ("unsupported scheme"), and we end up
with an empty .tar.gz file in the downloads directory.
Fix that by shell-extracting the backend name from the left of the "|".
I'm not certain that all URIs will have a "|", so this code only looks
for a "|" to the left of the "+".
Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
Reviewed-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Tested-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The download wrapper is a purely internal helper, and is not supposed
to be called manually, so there is no need to offer a help text.
Besides, the help text was badly outdated.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
'+' is a valid character in a URL. The current dl-wrapper gets the
URI scheme by dropping everything after the last '+' character, with
the intention of finding 'git' from e.g. 'git+https://uri'.
If a uri has a '+' anywhere in it, it ends up using too much of the
string as a scheme, and fails to match the handler properly.
An example of where this form of URI is used is when using deploy
tokens in GitLab. Those use a form like
https://<username>:<password>@gitlab.com/<group>/<repo.git>, where the
username for a deploy token is of the form
'gitlab+deploy-token-<number>'.
Use the %% operator, which removes the longest matching suffix, so
that everything from the first '+' character onwards is dropped: the
first '+' in the string is the one that terminates the scheme.
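A short before/after sketch (the URI is a made-up example in the shape
GitLab uses):
    uri='git+https://gitlab+deploy-token-123:secret@gitlab.com/group/repo.git'
    scheme="${uri%+*}"    # old: strips from the LAST '+' -> 'git+https://gitlab'
    scheme="${uri%%+*}"   # new: strips from the FIRST '+' -> 'git'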
Signed-off-by: Robert Beckett <bbeckett@netvu.org.uk>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
${raw_name} is never defined in dl-wrapper, and therefore the value
passed to the -N option is always empty. This causes a problem for the
'cvs' backend, which uses the value of this option as the CVS module
to be downloaded.
If the name of the CVS module is omitted, all the CVS modules from
that CVS repository are downloaded, which creates a tarball with a lot
more contents, and the actual useful contents in a sub-directory,
obviously breaking patches that should be applied, and the entire
build process that follows.
Fixes:
http://autobuild.buildroot.net/results/fcee0e3d7eeeb373313b1794092c729b1b052348/
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Tested-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
When calling the backend-specific helper scripts, the remaining
options are in ${@}. However, in order to let the helper script know
that those remaining options should not be parsed, but instead passed
as-is to the download tool, they must be separated from the main
options by "--".
Without this, packages that use <pkg>_DL_OPTS, such as the
amd-catalyst package, cannot download their tarball anymore.
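Illustratively, the fixed call looks like this (helper path and
options shown schematically):
    # everything after "--" is forwarded untouched to the download tool,
    # which is where <pkg>_DL_OPTS ends up:
    support/download/"${backend}" ${quiet} -o "${tmp_output}" -u "${uri}" -- "${@}"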
Fixes:
http://autobuild.buildroot.net/results/de818f6e4c8e63d5e8a49c445d10c34eccc40410/
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Tested-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
When the BR2_DL_DIR is a mountpoint (presumably shared between various
machines, or mounted from the local host when running in a VM), it is
possible that it does not support hardlinks (e.g. samba, or the VMWare
VMFS, etc.).
If the hardlink fails, fall back to copying the file. As a last resort,
if that also fails, eventually fall back to doing the download.
Note: this means that the dl-wrapper is no longer atomic-safe: the code
suffers from a TOCTTOU (time-of-check-to-time-of-use) condition: the
file may be created between the check and the moment we try to ln/cp
it. Fortunately, the dl-wrapper is now run under an flock, so we're
still safe. If we eventually go for a
more fine-grained implementation, we'll have to be careful then.
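The fallback chain then amounts to (a sketch; variable names are
illustrative):
    ln "${cached_file}" "${output}" 2>/dev/null \
    || cp "${cached_file}" "${output}" 2>/dev/null \
    || do_download=1   # last resort: do the actual download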
Reported-by: Arnout Vandecappelle <arnout@mind.be>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Peter Korsgaard <peter@korsgaard.com>
Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
For existing setups, the global download directory may have a lot of
the required archives, so look into it before attempting a download.
We simply hard-link them if found there and not in the new per-package
location. Then we resume the existing procedure (which means the new
hardlink will get removed if it happens to not match the hash).
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
Cc: Peter Korsgaard <peter@korsgaard.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The infrastructure needs to give the 'dl_dir' to the dl-wrapper, which
in turn needs to give it to the helper. It will only be used by the
'git' helper as of now.
Signed-off-by: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The goal here is to simplify the infrastructure by putting most of the
code in the dl-wrapper as it is easier to implement and to read.
Most of the functions were common already, this patch finalizes it by
making the pkg-download.mk pass all the parameters needed to the
dl-wrapper which in turn will pass everything to every backend.
The backend will then cherry-pick what it needs from these arguments
and act accordingly.
It eases the transition to the addition of a subdirectory per package
in the DL_DIR, and later on, a git cache.
[Peter: drop ';' in BR_NO_CHECK_HASH_FOR in DOWNLOAD macro and swap cd/rm
-rf as mentioned by Yann, fix typos]
Signed-off-by: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
Tested-by: Luca Ceresoli <luca@lucaceresoli.net>
Reviewed-by: Luca Ceresoli <luca@lucaceresoli.net>
Reviewed-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Currently, all download helpers accept the local output file, the
remote locations, the changesets and so on... as positional arguments.
This was all well and nice when that was all we needed.
But then we added an option to quiesce their verbosity, and that was
shoehorned in with a trivial getopts, still keeping all the existing
positional arguments as... positional arguments.
Adding yet more options while keeping positional arguments will not be
very easy, even if we do not envision any new option in the foreseeable
future (but 640K ought to be enough for everyone, remember? ;-) ).
Change all helpers to accept a set of generic options (-q for quiet and
-o for the output file) as well as helper-specific options (like -r for
the repository, -c for a changeset...).
Maxime:
Changed -R to -r for recurse (only for the git backend)
Changed -r to -u for URI (for all backends)
Changed -R to -c for cset (for the CVS and SVN backends)
Added the export of BR_BACKEND_DL_GETOPTS so that all the backend
wrappers can use the same options easily
Now all the backends use the same common options, as sketched below.
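With this in place, every backend starts with the same parsing loop,
along these lines (the case arms are a representative subset):
    while getopts "${BR_BACKEND_DL_GETOPTS}" OPT; do
        case "${OPT}" in
        q)  verbose=-q;;
        o)  output="${OPTARG}";;
        u)  uri="${OPTARG}";;
        c)  cset="${OPTARG}";;
        r)  recurse=-r;;
        esac
    done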
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Reviewed-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
In the situation where the hash is missing from the hash file, the
dl-wrapper downloads the file again and again until the developer
specifies the hash to complete the download step.
To avoid this situation, the freshly-downloaded file is not removed
anymore after a successful download.
After this change, the behaviour is as follows (see the sketch after
the list):
- Hash file doesn't exist, or file is in BR_NO_CHECK_HASH_FOR
=> always succeeds.
- Hash file exists, but there is no hash for the file
=> file is NOT removed, build is terminated immediately (i.e.
secondary site is not tried).
- Hash file exists, file is present, but hash mismatch
=> file is removed, secondary site is tried.
=> If all primary/secondary site downloads or hash checks fail, the
build is terminated.
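In dl-wrapper terms, the resulting post-download logic is roughly the
following (the exit-code values used here are assumptions, not the
exact check-hash protocol):
    support/download/check-hash ${quiet} "${hfile}" "${tmp_dl}" "${base}"
    case "${?}" in
    0)  ;;                   # hashes match, or no hash file: keep the file
    3)  exit 1;;             # no hash for this file: keep it, stop the build
    *)  rm -f "${tmp_dl}"    # mismatch: remove the file...
        continue;;           # ...and try the next site
    esac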
Signed-off-by: Gaël PORTAY <gael.portay@savoirfairelinux.com>
[Arnout: extend commit log]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Add a new package variable that packages can set to specify that they
need git submodules.
Only accept this option if the download method is git, as we cannot
get submodules via an http download (via wget).
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Aleksandar Simeonov <aleksandar@barix.com>
Tested-by: Matt Weber <matt@thewebers.ws>
Reviewed-by: Matt Weber <matt@thewebers.ws>
Tested-By: Nicolas Cavallari <nicolas.cavallari@green-communications.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
When the hash check reports that there is no hash for a file, and this
is treated as an error (now: because BR2_ENFORCE_CHECK_HASH is set;
later: because that will be the new and only behaviour), exit promptly
in error.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Currently, specifying a hash file for our download wrapper is mandatory.
However, when we download a git, svn, bzr, hg or cvs tree, there's by
design no hash to check the download against.
Since we're going to have hash checking mandatory when a hash file
exists, this would break those downloads from a repository.
So, make specifying a hash file optional when calling our download
wrapper and bail out early from the check-hash script if no hash file is
specified.
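The early bail-out is a couple of lines at the top of check-hash (the
argument position is assumed here):
    h_file="${1}"
    # no hash file specified: nothing to check, not an error
    [ -n "${h_file}" ] || exit 0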
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Reviewed-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Reviewed-by: Samuel Martin <s.martin49@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
If doing a silent build (make -s -> QUIET=-q), silence all downloads,
by passing the -q flag downward to backends as well as to check-hash.
Change a printf to use the trace functions.
Signed-off-by: Fabio Porcedda <fabio.porcedda@gmail.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
In some cases, upstreams just update their releases in place, without
renaming them. When such a package is updated in Buildroot, a new hash to
match the new upstream release is included in the corresponding .hash
file.
As a consequence, users who previously downloaded that package's
tarball with an older version of Buildroot will get stuck with an old
archive for that package and, after updating their Buildroot copy,
will be greeted with a failed download, due to the local file not
matching the new hashes.
Also, an upstream will sometimes serve us HTML garbage instead of the
actual tarball we requested, as SourceForge does from time to time,
for as-yet unknown reasons.
So, to avoid this situation, check the hashes prior to doing the
download. If the hashes match, consider the locally cached file genuine,
and do not download it. However, if the locally cached file does not
match the known hashes we have for it, it is promptly removed, and a
download is re-attempted.
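In the wrapper, the check reorders to something like this (a sketch;
the helper arguments are schematic):
    if [ -e "${output}" ]; then
        if support/download/check-hash "${hfile}" "${output}" "${base}"; then
            exit 0          # cached file is genuine: no download needed
        fi
        rm -f "${output}"   # stale or corrupted: remove, then re-download
    fi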
Note: this does not add any overhead compared to the previous situation,
because we were already checking hashes of locally cached files. It just
changes the order in which we do the checks. For the record, here is
the overhead of hashing a 231MiB file
(qt-everywhere-opensource-src-4.8.6.tar.gz) on a Core i5 @2.5GHz:
               cache-cold   cache-hot
    sha1       1.914s       0.762s
    sha256     2.109s       1.270s
But again, this overhead already existed before this patch.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Instead of repeating the check in our download rules, delegate the check
of the hashes to the download wrapper.
This needs three different changes:
- add a new argument to the download wrapper, that is the full path to
the hash file; if the hash file does not exist, that does not change
the current behaviour, as the existence of the hash file is checked
for in the check-hash script;
- add a third argument to the check-hash script, to be the basename of
the file to check; this is required because we no longer check the
final file with the final filename, but an intermediate file with a
temporary filename;
- do the actual call to the check-hash script from within the download
wrapper.
This further paves the way to doing pre-download checks of the hashes
for the locally cached files.
Note: this patch removes the check for hashes for already downloaded
files, since the wrapper script exits early. The behaviour to check
locally cached files will be restored and enhanced in the following
patch.
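Put together, the wrapper's call ends up along these lines (schematic;
the downloaded file still has its temporary name at this point, hence
the explicit base name argument):
    # check the temporary file against the hashes recorded for the
    # final file's base name:
    support/download/check-hash "${hfile}" "${tmp_dl}" "${base_name}"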
[Thomas: fix minor typo in comment.]
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Instead of repeating the same test again and again in all our download
rules, just delegate the check for an already downloaded file to the
download wrapper.
This clears up the path for doing the hash checks on a cached file
before the download.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Instead of relying on argument ordering, use actual options in the
download wrapper.
Download backends (bzr, cp, hg...) are left as-is, because it does not
make sense to complexify them: they are mostly very trivial shell
scripts, and adding option parsing would be overkill.
This commit also renames the script to dl-wrapper so it looks better in
the traces, and it is not confused with another wrapper.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>