Now that our pipelines are using the Docker image from the GitLab
registry, there is no longer any reason to push the image to
Docker Hub.
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
On a properly set-up machine, using sudo to run docker is totally
unnecessary, and is in fact bad practice. Instead, users should add
themselves to the docker group.
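For reference, on most setups that is a one-liner (a re-login, or
newgrp, is needed for the new group membership to take effect):

    $ sudo usermod -aG docker "$USER"
    $ docker run --rm hello-world    # should now work without sudo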
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
Cc: Romain Naour <romain.naour@gmail.com>
Cc: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Reviewed-by: Romain Naour <romain.naour@gmail.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Starting with Qemu 6.1.0, gcc 7.5 is needed to build.
Since we build the host-qemu package for the qemu defconfig, we have
to upgrade to (at least) Debian buster, which provides gcc 8 as the
host compiler.
While testing this upgrade, test_edk2 failed, since it actually
requires Qemu >= 4.1.0 to support the arm SBSA reference machine [1],
while Debian buster only provides Qemu 3.1.
So, upgrade to Debian bullseye instead; this in turn requires bumping
the Linux kernel version in several defconfigs, since the host gcc is
now gcc 10 [2].
[1] https://git.qemu.org/?p=qemu.git;a=commit;h=64580903c2b3aee08d74d64e6248a313b246cb69
[2] http://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=621f2ded601546119fabccd1651b1ae29d26cd38
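In Dockerfile terms, the upgrade essentially boils down to bumping the
pinned base image, along these lines (the snapshot date below is a
placeholder, not the actual tag used):

    FROM debian:bullseye-YYYYMMDD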
Signed-off-by: Romain Naour <romain.naour@gmail.com>
[Arnout: don't install python]
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Since commit 4a40d36f13
("support/testing: switch to Python 3 only"), our runtime testing
infrastructure is Python 3.x only.
Therefore, there is no longer any need for python-nose2 and
python-pexpect in the Docker container used to run our GitLab CI jobs.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Signed-off-by: Yann E. MORIN <yann.morin.1998@free.fr>
support/scripts/pkg-stats now uses some Python 3.x-only constructs
("async" and related keywords), so we must use the Python 3.x flake8.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
The test infra will soon be converted to Python 3 only.
So, add the interpreter, as well as the Python 3 variants of the nose2
and pexpect modules, to the Docker image used to run the runtime tests.
Keep the Python 2 variants of those modules to allow a gradual
transition.
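The corresponding additions to the package list would look roughly like
this (Debian package names; the surrounding apt-get invocation is
abridged):

    RUN apt-get install -y --no-install-recommends \
            python-nose2 \
            python-pexpect \
            python3 \
            python3-nose2 \
            python3-pexpect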
Signed-off-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Nicolas Carrier <nicolas.carrier@orolia.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Currently, we install flake8 and its dependencies via pip. We
tried to be reproducible by pinning the versions of those Python
packages, but we forgot quite a few of them, so some of flake8's
dependencies are installed at uncontrolled versions.
Furthermore, before we install flake8 and its dependencies, we
forcibly update the pip, setuptools, and wheel packages to their
latest versions, which explicitly breaks reproducibility.
While we could enforce a specific version of all those packages
and still grab them from PyPI, we can simply grab them from the
distribution-provided packages instead.
Since we're using a pinned version of stretch, this already
guarantees we'll reproducibly get the same versions over and
over again. Besides, we just need to list flake8 as a package to
install to automatically get all its dependencies (again, in a
reproducible way).
This has the slightly unfortunate drawback of downgrading flake8
from version 3.5.0 to version 3.2.1, as well as downgrading a few of
flake8's dependencies, as noticed by Ricardo:
http://lists.busybox.net/pipermail/buildroot/2018-May/222376.html
However, as Ricardo said, there isn't "any serious limitation of
this old version, the release notes for a version in the between
mentions 'Dramatically improve the performance' but we have a
limited number of scripts and running on Gitlab for all of them
still takes less than 5 minutes".
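Concretely, the pip-based installation is replaced by listing the
distro package; the before/after is roughly as follows (the pip lines
are reconstructed for illustration, not quoted from the actual
Dockerfile):

    # before: pip install --upgrade pip setuptools wheel && pip install flake8==3.5.0
    # after:
    RUN apt-get install -y --no-install-recommends \
            flake8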
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Acked-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
As suggested in the Docker best practices [0], order the package list
alphabetically, and list only one package per line.
This will be very useful later, whenever we need to update the list of
installed packages, for example to add new ones.
[0] https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#sort-multi-line-arguments
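That is, instead of one long line, the install command ends up looking
like this (only a few illustrative package names shown):

    RUN apt-get install -y --no-install-recommends \
            bc \
            build-essential \
            bzr \
            cvs \
            git
    # ...and so on, one package per line, sorted alphabetically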
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Reviewed-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
In commit 7517aef4d (support/docker: limit the number of layers),
we reduced the number of layers by coalescing multiple RUN commands
into fewer commands.
In doing so, we especially coalesced "apt-get update" with "apt-get
install".
However, the distribution we use is a pinned version of stretch, so
we know that running apt-get update will always yield the same apt
database.
If we split the two apt-get commands, then we can re-use any local
intermediate image when we need to update the list of packages to
install; this helps quite a bit when testing the Dockerfiles over
and over again, with just slight variations in the package list.
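A sketch of the resulting split (flags and package list abridged):

    # This layer is cached once: the pinned stretch snapshot always
    # yields the same apt database.
    RUN apt-get update
    # Only this layer is rebuilt when the package list changes.
    RUN apt-get install -y --no-install-recommends \
            <package list>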
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Cc: Peter Korsgaard <peter@korsgaard.com>
Reviewed-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Use the latest version of flake8, because it is actively maintained,
but pin the tool and its dependencies to fixed versions to get stable
results. Those versions can be bumped manually from time to time.
Before installing any Python packages, ensure pip, setuptools, and wheel
are up to date as recommended in the docs [1].
[1] https://packaging.python.org/tutorials/installing-packages/
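The resulting Dockerfile steps look roughly like this (the pinned
flake8 version is illustrative):

    RUN pip install --upgrade pip setuptools wheel && \
        pip install flake8==3.5.0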
Signed-off-by: Ricardo Martincoski <ricardo.martincoski@gmail.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Peter Korsgaard <peter@korsgaard.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Yann E. MORIN <yann.morin.1998@free.fr>
Acked-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Some packages build 32-bit C++ host tools and need g++-multilib to
be installed on the build machine. For example, qt5webengine builds a
C++ host tool when the target is 32-bit.
Add a check for g++-multilib to the dependencies script, and update
the Dockerfile to install the g++-multilib package.
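On the Dockerfile side this is just one more entry in the package list,
for example:

    RUN apt-get install -y --no-install-recommends \
            g++-multilib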
Signed-off-by: Gaël PORTAY <gael.portay@savoirfairelinux.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Currently, we refer to the latest version of the image, which means we
can't guarantee any reproducibility. It also means we can't have
separate images for the maintenance branches (especially the LTS) and
master.
Update the comment in the Dockerfile to create and push tagged images.
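With tagged images, building and pushing looks something like this
(image name and tag format are placeholders, not necessarily the exact
ones documented in the comment):

    $ docker build -t buildroot/base:YYYYMMDD.HHMM support/docker
    $ docker push buildroot/base:YYYYMMDD.HHMM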
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Since we're now using a specific base image tag, we also need to use a
specific, stable repository from which to get additional packages for
this image.
As such, use the Debian snapshot that matches the base image.
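In practice that means pointing apt at snapshot.debian.org with a date
matching the base image tag, roughly along these lines (the exact
sources.list contents are an assumption):

    RUN echo "deb http://snapshot.debian.org/archive/debian/20171210T000000Z stretch main" \
            > /etc/apt/sources.list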
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Currently, we are using debian:stable, which is subject to change over
time, as new stable versions of Debian are released or updated.
Instead, use the latest tagged stable release, stretch-20171210 as of
today, as the base distribution.
This will ease reproducible builds in the future.
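Concretely, the base image line in the Dockerfile becomes:

    FROM debian:stretch-20171210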
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
This image is not built very often, and when it is, it is important to
see what's going on. So, don't be silent when installing packages from
the distro; since that step can take a bit of time, the output also
serves as a progress report...
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Arnout Vandecappelle <arnout@mind.be>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
The official documentation [0] suggests limiting the number of layers
generated from a Dockerfile. A layer is created for each RUN (and COPY
and ADD) command. But we are only ever interested in the final image,
so the intermediate layers are useless to us.
Limit the number of RUN commands to limit the number of generated
layers.
[0] https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#minimize-the-number-of-layers
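The usual way to achieve this is to chain commands with && inside a
single RUN, for instance (flags and package names abridged):

    RUN apt-get update && \
        apt-get install -y --no-install-recommends <packages> && \
        apt-get clean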
Reported-by: Peter Korsgaard <peter@korsgaard.com>
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Peter Korsgaard <peter@korsgaard.com>
Cc: Arnout Vandecappelle <arnout@mind.be>
Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
Currently, our jobs on the GitLab CI infra are running as root, which
is problematic for two reasons:
- this is not the usual way Buildroot is built;
- it may hide issues that only show up when running as a non-root user.
So, complement our Dockerfile with directives to add a new user and run
everything as that user, as demonstrated by this build job:
https://gitlab.com/ymorin/buildroot-ci/-/jobs/46929562
Additionally, enforce a UTF-8 locale while running.
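A sketch of the corresponding Dockerfile directives (the user name,
shell and locale value are assumptions, not necessarily what the
actual Dockerfile uses):

    RUN useradd -ms /bin/bash br-user
    USER br-user
    WORKDIR /home/br-user
    ENV LC_ALL en_US.UTF-8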
Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Cc: Arnout Vandecappelle <arnout@mind.be>
Cc: Peter Korsgaard <peter@korsgaard.com>
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
For GitLab CI, we want to avoid regenerating the minimal installation
needed to run the tests every time. So, let's create a Docker image
that we can push to Docker Hub and then pull.
For the time being, this is just what we need for running our CI. Later
we can produce something that is also useful for users.
Signed-off-by: Arnout Vandecappelle (Essensium/Mind) <arnout@mind.be>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>