2014-12-11 23:52:05 +01:00
|
|
|
#!/usr/bin/env bash
|
|
|
|
|
|
|
|
# This script is a wrapper to the other download backends.
|
|
|
|
# Its role is to ensure atomicity when saving downloaded files
|
|
|
|
# back to BR2_DL_DIR, and not clutter BR2_DL_DIR with partial,
|
|
|
|
# failed downloads.
|
|
|
|
#
|
|
|
|
# Call it with -h to see some help.
|
|
|
|
|
|
|
|
# To avoid cluttering BR2_DL_DIR, we download to a trashable
|
|
|
|
# location, namely in $(BUILD_DIR).
|
|
|
|
# Then, we move the downloaded file to a temporary file in the
|
|
|
|
# same directory as the final output file.
|
|
|
|
# This allows us to finally atomically rename it to its final
|
|
|
|
# name.
|
|
|
|
# If anything goes wrong, we just remove all the temporaries
|
|
|
|
# created so far.
|
|
|
|
|
|
|
|
# We want to catch any unexpected failure, and exit immediately.
|
|
|
|
set -e
|
|
|
|
|
2018-04-02 16:58:01 +02:00
|
|
|
export BR_BACKEND_DL_GETOPTS=":hc:d:o:n:N:H:ru:qf:e"
|
2018-04-02 10:14:22 +02:00
|
|
|
|
2014-12-11 23:52:05 +01:00
|
|
|
main() {
|
|
|
|
local OPT OPTARG
|
support/download: keep files downloaded without hash
In the situation where the hash is missing from the hash file, the
dl-wrapper downloads the file again and again until the developer
specifies the hash to complete the download step.
To avoid this situation, the freshly-downloaded file is not removed
anymore after a successful download.
After this change, the behaviour is as follows:
- Hash file doesn't exist, or file is in BR_NO_CHECK_HASH_FOR
=> always succeeds.
- Hash file exists, but file is not present
=> file is NOT removed, build is terminated immediately (i.e.
secondary site is not tried).
- Hash file exists, file is present, but hash mismatch
=> file is removed, secondary site is tried.
=> If all primary/secondary site downloads or hash checks fail, the
build is terminated.
Signed-off-by: Gaël PORTAY <gael.portay@savoirfairelinux.com>
[Arnout: extend commit log]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
2017-11-06 01:46:50 +01:00
|
|
|
local backend output hfile recurse quiet rc
|
2018-04-02 10:14:23 +02:00
|
|
|
local -a uris
|
2014-12-11 23:52:05 +01:00
|
|
|
|
|
|
|
# Parse our options; anything after '--' is for the backend
|
2018-04-02 17:13:58 +02:00
|
|
|
while getopts ":hc:d:D:o:n:N:H:rf:u:q" OPT; do
|
2014-12-11 23:52:05 +01:00
|
|
|
case "${OPT}" in
|
|
|
|
h) help; exit 0;;
|
2018-04-02 10:14:23 +02:00
|
|
|
c) cset="${OPTARG}";;
|
2018-04-02 16:58:01 +02:00
|
|
|
d) dl_dir="${OPTARG}";;
|
2018-04-02 17:13:58 +02:00
|
|
|
D) old_dl_dir="${OPTARG}";;
|
2014-12-11 23:52:05 +01:00
|
|
|
o) output="${OPTARG}";;
|
2018-04-02 10:14:23 +02:00
|
|
|
n) raw_base_name="${OPTARG}";;
|
|
|
|
N) base_name="${OPTARG}";;
|
2014-12-11 23:52:07 +01:00
|
|
|
H) hfile="${OPTARG}";;
|
2016-07-01 11:01:19 +02:00
|
|
|
r) recurse="-r";;
|
2018-04-02 10:14:23 +02:00
|
|
|
f) filename="${OPTARG}";;
|
|
|
|
u) uris+=( "${OPTARG}" );;
|
2015-01-02 16:53:40 +01:00
|
|
|
q) quiet="-q";;
|
2014-12-11 23:52:05 +01:00
|
|
|
:) error "option '%s' expects a mandatory argument\n" "${OPTARG}";;
|
|
|
|
\?) error "unknown option '%s'\n" "${OPTARG}";;
|
|
|
|
esac
|
|
|
|
done
|
2018-04-02 10:14:23 +02:00
|
|
|
|
2014-12-11 23:52:05 +01:00
|
|
|
# Forget our options, and keep only those for the backend
|
|
|
|
shift $((OPTIND-1))
|
|
|
|
|
|
|
|
if [ -z "${output}" ]; then
|
|
|
|
error "no output specified, use -o\n"
|
|
|
|
fi
|
|
|
|
|
2018-04-02 17:13:58 +02:00
|
|
|
# Legacy handling: check if the file already exists in the global
|
|
|
|
# download directory. If it does, hard-link it. If it turns out it
|
|
|
|
# was an incorrect download, we'd still check it below anyway.
|
core/download: fix when the BR2_DL_DIR does not accept hardlinks
When the BR2_DL_DIR is a mountpoint (presumably shared between various
machine, or mounted from the local host when running in a VM), it is
possible that it does not support hardlinks (e.g. samba, or the VMWare
VMFS, etc...).
If the hardlink fails, fallback to copying the file. As a last resort,
if that also fails, eventually fallback to doing the download.
Note: this means that the dl-wrapper is no longer atomic-safe: the code
suffers of a TOCTTOU condition: the file may be created in-between the
check and the moment we try to ln/cp it. Fortunately, the dl-wrapper is
now run under an flock, so we're still safe. If we eventually go for a
more fine-grained implementation, we'll have to be careful then.
Reported-by: Arnout Vandecappelle <arnout@mind.be>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Peter Korsgaard <peter@korsgaard.com>
Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
2018-04-03 20:54:56 +02:00
|
|
|
# If we can neither link nor copy, fallback to doing a download.
|
|
|
|
# NOTE! This is not atomic, is subject to TOCTTOU, but the whole
|
|
|
|
# dl-wrapper runs under an flock, so we're safe.
|
2018-04-02 17:13:58 +02:00
|
|
|
if [ ! -e "${output}" -a -e "${old_dl_dir}/${filename}" ]; then
|
core/download: fix when the BR2_DL_DIR does not accept hardlinks
When the BR2_DL_DIR is a mountpoint (presumably shared between various
machine, or mounted from the local host when running in a VM), it is
possible that it does not support hardlinks (e.g. samba, or the VMWare
VMFS, etc...).
If the hardlink fails, fallback to copying the file. As a last resort,
if that also fails, eventually fallback to doing the download.
Note: this means that the dl-wrapper is no longer atomic-safe: the code
suffers of a TOCTTOU condition: the file may be created in-between the
check and the moment we try to ln/cp it. Fortunately, the dl-wrapper is
now run under an flock, so we're still safe. If we eventually go for a
more fine-grained implementation, we'll have to be careful then.
Reported-by: Arnout Vandecappelle <arnout@mind.be>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Peter Korsgaard <peter@korsgaard.com>
Cc: Maxime Hadjinlian <maxime.hadjinlian@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
2018-04-03 20:54:56 +02:00
|
|
|
ln "${old_dl_dir}/${filename}" "${output}" || \
|
|
|
|
cp "${old_dl_dir}/${filename}" "${output}" || \
|
|
|
|
true
|
2018-04-02 17:13:58 +02:00
|
|
|
fi
|
|
|
|
|
2015-04-01 00:15:05 +02:00
|
|
|
# If the output file already exists and:
|
|
|
|
# - there's no .hash file: do not download it again and exit promptly
|
|
|
|
# - matches all its hashes: do not download it again and exit promptly
|
|
|
|
# - fails at least one of its hashes: force a re-download
|
|
|
|
# - there's no hash (but a .hash file): consider it a hard error
|
2014-12-11 23:52:06 +01:00
|
|
|
if [ -e "${output}" ]; then
|
2015-01-02 16:53:40 +01:00
|
|
|
if support/download/check-hash ${quiet} "${hfile}" "${output}" "${output##*/}"; then
|
pkg-download: check hashes for locally cached files
In some cases, upstream just update their releases in-place, without
renaming them. When that package is updated in Buildroot, a new hash to
match the new upstream release is included in the corresponding .hash
file.
As a consequence, users who previously downloaded that package's tarball
with an older version of Buildroot, will get stuck with an old archive
for that package, and after updating their Buildroot copy, will be greeted
with a failed download, due to the local file not matching the new
hashes.
Also, an upstream would sometime serve us HTML garbage instead of the
actual tarball we requested, like SourceForge does from time for as-yet
unknown reasons.
So, to avoid this situation, check the hashes prior to doing the
download. If the hashes match, consider the locally cached file genuine,
and do not download it. However, if the locally cached file does not
match the known hashes we have for it, it is promptly removed, and a
download is re-attempted.
Note: this does not add any overhead compared to the previous situation,
because we were already checking hashes of locally cached files. It just
changes the order in which we do the checks. For the records, here is the
overhead of hashing a 231MiB file (qt-everywhere-opensource-src-4.8.6.tar.gz)
on a core-i5 @2.5GHz:
cache-cold cache-hot
sha1 1.914s 0.762s
sha256 2.109s 1.270s
But again, this overhead already existed before this patch.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
2014-12-11 23:52:08 +01:00
|
|
|
exit 0
|
2015-04-01 00:15:05 +02:00
|
|
|
elif [ ${?} -ne 2 ]; then
|
|
|
|
# Do not remove the file, otherwise it might get re-downloaded
|
|
|
|
# from a later location (i.e. primary -> upstream -> mirror).
|
|
|
|
# Do not print a message, check-hash already did.
|
|
|
|
exit 1
|
pkg-download: check hashes for locally cached files
In some cases, upstream just update their releases in-place, without
renaming them. When that package is updated in Buildroot, a new hash to
match the new upstream release is included in the corresponding .hash
file.
As a consequence, users who previously downloaded that package's tarball
with an older version of Buildroot, will get stuck with an old archive
for that package, and after updating their Buildroot copy, will be greeted
with a failed download, due to the local file not matching the new
hashes.
Also, an upstream would sometime serve us HTML garbage instead of the
actual tarball we requested, like SourceForge does from time for as-yet
unknown reasons.
So, to avoid this situation, check the hashes prior to doing the
download. If the hashes match, consider the locally cached file genuine,
and do not download it. However, if the locally cached file does not
match the known hashes we have for it, it is promptly removed, and a
download is re-attempted.
Note: this does not add any overhead compared to the previous situation,
because we were already checking hashes of locally cached files. It just
changes the order in which we do the checks. For the records, here is the
overhead of hashing a 231MiB file (qt-everywhere-opensource-src-4.8.6.tar.gz)
on a core-i5 @2.5GHz:
cache-cold cache-hot
sha1 1.914s 0.762s
sha256 2.109s 1.270s
But again, this overhead already existed before this patch.
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Peter Korsgaard <jacmet@uclibc.org>
Cc: Gustavo Zacarias <gustavo@zacarias.com.ar>
Reviewed-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
2014-12-11 23:52:08 +01:00
|
|
|
fi
|
|
|
|
rm -f "${output}"
|
2015-01-02 16:53:40 +01:00
|
|
|
warn "Re-downloading '%s'...\n" "${output##*/}"
|
2014-12-11 23:52:06 +01:00
|
|
|
fi
|
|
|
|
|
2018-04-02 10:14:23 +02:00
|
|
|
# Look through all the uris that we were given to download the package
|
|
|
|
# source
|
|
|
|
download_and_check=0
|
|
|
|
rc=1
|
|
|
|
for uri in "${uris[@]}"; do
|
2018-06-04 15:32:30 +02:00
|
|
|
backend=${uri%%+*}
|
2018-04-02 10:14:23 +02:00
|
|
|
case "${backend}" in
|
|
|
|
git|svn|cvs|bzr|file|scp|hg) ;;
|
|
|
|
*) backend="wget" ;;
|
|
|
|
esac
|
|
|
|
uri=${uri#*+}
|
|
|
|
|
|
|
|
urlencode=${backend#*|}
|
|
|
|
# urlencode must be "urlencode"
|
|
|
|
[ "${urlencode}" != "urlencode" ] && urlencode=""
|
|
|
|
|
|
|
|
# tmpd is a temporary directory in which backends may store
|
|
|
|
# intermediate by-products of the download.
|
|
|
|
# tmpf is the file in which the backends should put the downloaded
|
|
|
|
# content.
|
|
|
|
# tmpd is located in $(BUILD_DIR), so as not to clutter the (precious)
|
|
|
|
# $(BR2_DL_DIR)
|
|
|
|
# We let the backends create tmpf, so they are able to set whatever
|
|
|
|
# permission bits they want (although we're only really interested in
|
|
|
|
# the executable bit.)
|
|
|
|
tmpd="$(mktemp -d "${BUILD_DIR}/.${output##*/}.XXXXXX")"
|
|
|
|
tmpf="${tmpd}/output"
|
|
|
|
|
|
|
|
# Helpers expect to run in a directory that is *really* trashable, so
|
|
|
|
# they are free to create whatever files and/or sub-dirs they might need.
|
|
|
|
# Doing the 'cd' here rather than in all backends is easier.
|
|
|
|
cd "${tmpd}"
|
|
|
|
|
|
|
|
# If the backend fails, we can just remove the content of the temporary
|
|
|
|
# directory to remove all the cruft it may have left behind, and try
|
|
|
|
# the next URI until it succeeds. Once out of URI to try, we need to
|
|
|
|
# cleanup and exit.
|
|
|
|
if ! "${OLDPWD}/support/download/${backend}" \
|
|
|
|
$([ -n "${urlencode}" ] && printf %s '-e') \
|
|
|
|
-c "${cset}" \
|
2018-04-02 16:58:01 +02:00
|
|
|
-d "${dl_dir}" \
|
2018-04-02 10:14:23 +02:00
|
|
|
-n "${raw_base_name}" \
|
2018-04-11 10:10:24 +02:00
|
|
|
-N "${base_name}" \
|
2018-04-02 10:14:23 +02:00
|
|
|
-f "${filename}" \
|
|
|
|
-u "${uri}" \
|
|
|
|
-o "${tmpf}" \
|
2018-04-11 09:31:21 +02:00
|
|
|
${quiet} ${recurse} -- "${@}"
|
2018-04-02 10:14:23 +02:00
|
|
|
then
|
|
|
|
# cd back to keep path coherence
|
|
|
|
cd "${OLDPWD}"
|
support/download: keep files downloaded without hash
In the situation where the hash is missing from the hash file, the
dl-wrapper downloads the file again and again until the developer
specifies the hash to complete the download step.
To avoid this situation, the freshly-downloaded file is not removed
anymore after a successful download.
After this change, the behaviour is as follows:
- Hash file doesn't exist, or file is in BR_NO_CHECK_HASH_FOR
=> always succeeds.
- Hash file exists, but file is not present
=> file is NOT removed, build is terminated immediately (i.e.
secondary site is not tried).
- Hash file exists, file is present, but hash mismatch
=> file is removed, secondary site is tried.
=> If all primary/secondary site downloads or hash checks fail, the
build is terminated.
Signed-off-by: Gaël PORTAY <gael.portay@savoirfairelinux.com>
[Arnout: extend commit log]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
2017-11-06 01:46:50 +01:00
|
|
|
rm -rf "${tmpd}"
|
2018-04-02 10:14:23 +02:00
|
|
|
continue
|
|
|
|
fi
|
|
|
|
|
|
|
|
# cd back to free the temp-dir, so we can remove it later
|
|
|
|
cd "${OLDPWD}"
|
|
|
|
|
|
|
|
# Check if the downloaded file is sane, and matches the stored hashes
|
|
|
|
# for that file
|
|
|
|
if support/download/check-hash ${quiet} "${hfile}" "${tmpf}" "${output##*/}"; then
|
|
|
|
rc=0
|
|
|
|
else
|
|
|
|
if [ ${?} -ne 3 ]; then
|
|
|
|
rm -rf "${tmpd}"
|
|
|
|
continue
|
|
|
|
fi
|
|
|
|
|
|
|
|
# the hash file exists and there was no hash to check the file
|
|
|
|
# against
|
|
|
|
rc=1
|
support/download: keep files downloaded without hash
In the situation where the hash is missing from the hash file, the
dl-wrapper downloads the file again and again until the developer
specifies the hash to complete the download step.
To avoid this situation, the freshly-downloaded file is not removed
anymore after a successful download.
After this change, the behaviour is as follows:
- Hash file doesn't exist, or file is in BR_NO_CHECK_HASH_FOR
=> always succeeds.
- Hash file exists, but file is not present
=> file is NOT removed, build is terminated immediately (i.e.
secondary site is not tried).
- Hash file exists, file is present, but hash mismatch
=> file is removed, secondary site is tried.
=> If all primary/secondary site downloads or hash checks fail, the
build is terminated.
Signed-off-by: Gaël PORTAY <gael.portay@savoirfairelinux.com>
[Arnout: extend commit log]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
2017-11-06 01:46:50 +01:00
|
|
|
fi
|
2018-04-02 10:14:23 +02:00
|
|
|
download_and_check=1
|
|
|
|
break
|
|
|
|
done
|
support/download: keep files downloaded without hash
In the situation where the hash is missing from the hash file, the
dl-wrapper downloads the file again and again until the developer
specifies the hash to complete the download step.
To avoid this situation, the freshly-downloaded file is not removed
anymore after a successful download.
After this change, the behaviour is as follows:
- Hash file doesn't exist, or file is in BR_NO_CHECK_HASH_FOR
=> always succeeds.
- Hash file exists, but file is not present
=> file is NOT removed, build is terminated immediately (i.e.
secondary site is not tried).
- Hash file exists, file is present, but hash mismatch
=> file is removed, secondary site is tried.
=> If all primary/secondary site downloads or hash checks fail, the
build is terminated.
Signed-off-by: Gaël PORTAY <gael.portay@savoirfairelinux.com>
[Arnout: extend commit log]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
2017-11-06 01:46:50 +01:00
|
|
|
|
2018-04-02 10:14:23 +02:00
|
|
|
# We tried every URI possible, none seems to work or to check against the
|
|
|
|
# available hash. *ABORT MISSION*
|
|
|
|
if [ "${download_and_check}" -eq 0 ]; then
|
|
|
|
rm -rf "${tmpd}"
|
|
|
|
exit 1
|
2014-12-11 23:52:07 +01:00
|
|
|
fi
|
|
|
|
|
2014-12-11 23:52:05 +01:00
|
|
|
# tmp_output is in the same directory as the final output, so we can
|
|
|
|
# later move it atomically.
|
|
|
|
tmp_output="$(mktemp "${output}.XXXXXX")"
|
|
|
|
|
|
|
|
# 'mktemp' creates files with 'go=-rwx', so the files are not accessible
|
|
|
|
# to users other than the one doing the download (and root, of course).
|
|
|
|
# This can be problematic when a shared BR2_DL_DIR is used by different
|
|
|
|
# users (e.g. on a build server), where all users may write to the shared
|
|
|
|
# location, since other users would not be allowed to read the files
|
|
|
|
# another user downloaded.
|
|
|
|
# So, we restore the 'go' access rights to a more sensible value, while
|
|
|
|
# still abiding by the current user's umask. We must do that before the
|
|
|
|
# final 'mv', so just do it now.
|
|
|
|
# Some backends (cp and scp) may create executable files, so we need to
|
|
|
|
# carry the executable bit if needed.
|
|
|
|
[ -x "${tmpf}" ] && new_mode=755 || new_mode=644
|
|
|
|
new_mode=$(printf "%04o" $((0${new_mode} & ~0$(umask))))
|
|
|
|
chmod ${new_mode} "${tmp_output}"
|
|
|
|
|
|
|
|
# We must *not* unlink tmp_output, otherwise there is a small window
|
|
|
|
# during which another download process may create the same tmp_output
|
|
|
|
# name (very, very unlikely; but not impossible.)
|
|
|
|
# Using 'cp' is not reliable, since 'cp' may unlink the destination file
|
|
|
|
# if it is unable to open it with O_WRONLY|O_TRUNC; see:
|
|
|
|
# http://pubs.opengroup.org/onlinepubs/9699919799/utilities/cp.html
|
|
|
|
# Since the destination filesystem can be anything, it might not support
|
|
|
|
# O_TRUNC, so 'cp' would unlink it first.
|
|
|
|
# Use 'cat' and append-redirection '>>' to save to the final location,
|
|
|
|
# since that is the only way we can be 100% sure of the behaviour.
|
|
|
|
if ! cat "${tmpf}" >>"${tmp_output}"; then
|
|
|
|
rm -rf "${tmpd}" "${tmp_output}"
|
|
|
|
exit 1
|
|
|
|
fi
|
|
|
|
rm -rf "${tmpd}"
|
|
|
|
|
|
|
|
# tmp_output and output are on the same filesystem, so POSIX guarantees
|
|
|
|
# that 'mv' is atomic, because it then uses rename() that POSIX mandates
|
|
|
|
# to be atomic, see:
|
|
|
|
# http://pubs.opengroup.org/onlinepubs/9699919799/functions/rename.html
|
|
|
|
if ! mv -f "${tmp_output}" "${output}"; then
|
|
|
|
rm -f "${tmp_output}"
|
|
|
|
exit 1
|
|
|
|
fi
|
support/download: keep files downloaded without hash
In the situation where the hash is missing from the hash file, the
dl-wrapper downloads the file again and again until the developer
specifies the hash to complete the download step.
To avoid this situation, the freshly-downloaded file is not removed
anymore after a successful download.
After this change, the behaviour is as follows:
- Hash file doesn't exist, or file is in BR_NO_CHECK_HASH_FOR
=> always succeeds.
- Hash file exists, but file is not present
=> file is NOT removed, build is terminated immediately (i.e.
secondary site is not tried).
- Hash file exists, file is present, but hash mismatch
=> file is removed, secondary site is tried.
=> If all primary/secondary site downloads or hash checks fail, the
build is terminated.
Signed-off-by: Gaël PORTAY <gael.portay@savoirfairelinux.com>
[Arnout: extend commit log]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
2017-11-06 01:46:50 +01:00
|
|
|
|
|
|
|
return ${rc}
|
2014-12-11 23:52:05 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
help() {
|
|
|
|
cat <<_EOF_
|
|
|
|
NAME
|
|
|
|
${my_name} - download wrapper for Buildroot
|
|
|
|
|
|
|
|
SYNOPSIS
|
|
|
|
${my_name} [OPTION]... -- [BACKEND OPTION]...
|
|
|
|
|
|
|
|
DESCRIPTION
|
|
|
|
Wrapper script around different download mechanisms. Ensures that
|
|
|
|
concurrent downloads do not conflict, that partial downloads are
|
|
|
|
properly evicted without leaving temporary files, and that access
|
|
|
|
rights are maintained.
|
|
|
|
|
|
|
|
-h This help text.
|
|
|
|
|
2018-04-02 10:14:23 +02:00
|
|
|
-u URIs
|
|
|
|
The URI to get the file from, the URI must respect the format given in
|
|
|
|
the example.
|
|
|
|
You may give as many '-u URI' as you want, the script will stop at the
|
|
|
|
frist successful download.
|
|
|
|
|
|
|
|
Example: backend+URI; git+http://example.com or http+http://example.com
|
2014-12-11 23:52:05 +01:00
|
|
|
|
|
|
|
-o FILE
|
|
|
|
Store the downloaded archive in FILE.
|
|
|
|
|
2014-12-11 23:52:07 +01:00
|
|
|
-H FILE
|
|
|
|
Use FILE to read hashes from, and check them against the downloaded
|
|
|
|
archive.
|
|
|
|
|
2014-12-11 23:52:05 +01:00
|
|
|
Exit status:
|
|
|
|
0 if OK
|
|
|
|
!0 in case of error
|
|
|
|
|
|
|
|
ENVIRONMENT
|
|
|
|
|
|
|
|
BUILD_DIR
|
|
|
|
The path to Buildroot's build dir
|
|
|
|
_EOF_
|
|
|
|
}
|
|
|
|
|
|
|
|
trace() { local msg="${1}"; shift; printf "%s: ${msg}" "${my_name}" "${@}"; }
|
|
|
|
warn() { trace "${@}" >&2; }
|
|
|
|
errorN() { local ret="${1}"; shift; warn "${@}"; exit ${ret}; }
|
|
|
|
error() { errorN 1 "${@}"; }
|
|
|
|
|
|
|
|
my_name="${0##*/}"
|
|
|
|
main "${@}"
|