This repository has been archived by the owner on Oct 22, 2024. It is now read-only.

build: use debian:buster as base images, non-reproducible builds #751

Merged (1 commit) on Sep 29, 2020
105 changes: 42 additions & 63 deletions Dockerfile
@@ -1,69 +1,46 @@
# CLEARLINUX_BASE and SWUPD_UPDATE_ARG can be used to make the build reproducible
# by choosing an image by its hash and updating to a certain version with -V:
# CLEAR_LINUX_BASE=clearlinux@sha256:b8e5d3b2576eb6d868f8d52e401f678c873264d349e469637f98ee2adf7b33d4
# SWUPD_UPDATE_ARG=-V 29970
#
# This is used on release branches before tagging a stable version. The master and devel
# branches default to using the latest Clear Linux.
ARG CLEAR_LINUX_BASE=clearlinux:latest
ARG SWUPD_UPDATE_ARG=
# Image builds are not reproducible because the base layer is changing over time.
ARG LINUX_BASE=debian:buster-slim
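
The pinning mechanism described in the comment above can be sketched as a concrete build invocation. This is a minimal sketch using the example digest and swupd version from the comment; the image tag is made up for illustration:

```shell
# Sketch only: make a release-branch build reproducible by pinning the base
# image by digest and the swupd update by version number. Both values are the
# examples from the Dockerfile comment; the image tag is illustrative.
CLEAR_LINUX_BASE="clearlinux@sha256:b8e5d3b2576eb6d868f8d52e401f678c873264d349e469637f98ee2adf7b33d4"
SWUPD_UPDATE_ARG="-V 29970"

# Assemble the docker invocation (echoed here rather than executed).
build_cmd="docker build --build-arg CLEAR_LINUX_BASE=$CLEAR_LINUX_BASE --build-arg SWUPD_UPDATE_ARG='$SWUPD_UPDATE_ARG' -t pmem-csi:release ."
echo "$build_cmd"
```

With `debian:buster-slim` as the new base, no equivalent pinning is done, which is why the PR description calls the builds non-reproducible.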

# Common base image for building PMEM-CSI:
# - up-to-date Clear Linux
# - ndctl installed
FROM ${CLEAR_LINUX_BASE} AS build
ARG CLEAR_LINUX_BASE
ARG SWUPD_UPDATE_ARG
# Common base image for building PMEM-CSI and running CI tests.
FROM ${LINUX_BASE} AS build
ARG APT_GET="env DEBIAN_FRONTEND=noninteractive apt-get"

ARG NDCTL_VERSION="68"
ARG NDCTL_CONFIGFLAGS="--disable-docs --without-systemd --without-bash"
ARG NDCTL_BUILD_DEPS="os-core-dev devpkg-util-linux devpkg-kmod devpkg-json-c"
ARG GO_VERSION="1.13.4"

# Pull dependencies required for downloading and building libndctl
# CACHEBUST is set by the CI when building releases to ensure that apt-get really gets
# run instead of just using some older, cached result.
ARG CACHEBUST
RUN echo "Updating build image from ${CLEAR_LINUX_BASE} to ${SWUPD_UPDATE_ARG:-the latest release}."
RUN swupd update ${SWUPD_UPDATE_ARG} && swupd bundle-add ${NDCTL_BUILD_DEPS} c-basic && rm -rf /var/lib/swupd /var/tmp/swupd

# In contrast to the runtime image below, here we can afford to install additional
# tools and recommended packages. But this image gets pushed to a registry by the CI as a cache,
# so it still makes sense to keep this layer small by removing /var/cache.
RUN ${APT_GET} update && \
${APT_GET} install -y gcc libndctl-dev make git curl iproute2 pkg-config xfsprogs e2fsprogs parted openssh-client python3 python3-venv && \
rm -rf /var/cache/*
RUN curl -L https://dl.google.com/go/go${GO_VERSION}.linux-amd64.tar.gz | tar -zxf - -C / && \
mkdir -p /usr/local/bin/ && \
for i in /go/bin/*; do ln -s $i /usr/local/bin/; done

WORKDIR /
RUN curl --fail --location --remote-name https://github.com/pmem/ndctl/archive/v${NDCTL_VERSION}.tar.gz
RUN tar zxvf v${NDCTL_VERSION}.tar.gz && mv ndctl-${NDCTL_VERSION} ndctl
WORKDIR /ndctl
RUN ./autogen.sh
# We install into /usr/local (keeps content separate from OS) but
# then symlink the .pc files to ensure that they are found without
# having to set PKG_CONFIG_PATH. The .pc file doesn't contain an -rpath
# and thus linked binaries do not find the shared libs unless we
# also symlink those.
RUN ./configure --prefix=/usr/local ${NDCTL_CONFIGFLAGS}
RUN make install && \
ln -s /usr/local/lib/pkgconfig/libndctl.pc /usr/lib64/pkgconfig/ && \
ln -s /usr/local/lib/pkgconfig/libdaxctl.pc /usr/lib64/pkgconfig/ && \
for i in /usr/local/lib/lib*.so.*; do ln -s $i /usr/lib64; done
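
The symlink trick described in the comment can be demonstrated with throwaway paths. This sketch uses a temporary directory instead of the real `/usr/local` and `/usr/lib64`, purely for illustration: a `.pc` file installed under the local prefix is linked into the default pkg-config search directory so that `PKG_CONFIG_PATH` does not need to be set.

```shell
# Sketch: recreate the layout in a temp dir (no root needed).
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/local/lib/pkgconfig" "$tmp/usr/lib64/pkgconfig"

# A stand-in for the libndctl.pc that "make install" produces.
echo "Name: libndctl" > "$tmp/usr/local/lib/pkgconfig/libndctl.pc"

# Symlink it into the directory that pkg-config searches by default.
ln -s "$tmp/usr/local/lib/pkgconfig/libndctl.pc" "$tmp/usr/lib64/pkgconfig/"

# The default search dir now resolves the file without PKG_CONFIG_PATH.
cat "$tmp/usr/lib64/pkgconfig/libndctl.pc"
```

The same reasoning applies to the shared libraries: because the `.pc` file carries no `-rpath`, binaries linked against it only find `libndctl.so.*` at runtime if those are also reachable from a default library directory.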

# The source archive has no license file. We link to the copy in GitHub instead.
RUN echo "For source code and licensing of ndctl, see https://github.com/pmem/ndctl/blob/v${NDCTL_VERSION}/COPYING" >/usr/local/lib/NDCTL.COPYING

# Clean image for deploying PMEM-CSI.
FROM ${CLEAR_LINUX_BASE} as runtime
ARG CLEAR_LINUX_BASE
ARG SWUPD_UPDATE_ARG
FROM ${LINUX_BASE} as runtime
ARG APT_GET="env DEBIAN_FRONTEND=noninteractive apt-get"
ARG CACHEBUST
ARG BIN_SUFFIX
LABEL maintainers="Intel"
LABEL description="PMEM CSI Driver"

# update and install needed bundles:
# Update and install the minimal set of additional packages
# needed at runtime:
# file - driver uses file utility to determine filesystem type
# xfsprogs - XFS filesystem utilities
# storage-utils - for lvm2 and ext4 (e2fsprogs) utilities
ARG CACHEBUST
RUN echo "Updating runtime image from ${CLEAR_LINUX_BASE} to ${SWUPD_UPDATE_ARG:-the latest release}."
RUN swupd update ${SWUPD_UPDATE_ARG} && swupd bundle-add file xfsprogs storage-utils \
$(if [ "$BIN_SUFFIX" = "-test" ]; then echo fio; fi) && \
rm -rf /var/lib/swupd /var/tmp/swupd
# xfsprogs, e2fsprogs - formatting filesystems
# lvm2 - volume management
# ndctl - pulls in the necessary library, useful by itself
# fio - only included in testing images
RUN ${APT_GET} update && \
${APT_GET} upgrade -y --no-install-recommends && \
${APT_GET} install -y --no-install-recommends file xfsprogs e2fsprogs lvm2 ndctl \
Contributor: So, from now on we are no longer building ndctl from source?

Contributor (author): Correct. It makes our bill of materials and the Dockerfile simpler. I don't think we ever really needed the ability to build a newer version than the one offered by the distro, and since ndctl is even more mature now, I don't expect to need that either.

$(if [ "$BIN_SUFFIX" = "-test" ]; then echo fio; fi) && \
rm -rf /var/cache/*

# Image in which PMEM-CSI binaries get built.
FROM build as binaries
@@ -116,21 +93,23 @@ RUN set -x && \
FROM runtime as pmem

# Move required binaries and libraries to clean container.
# All of our custom content is in /usr/local.
COPY --from=binaries /usr/local/lib/libndctl.so.* /usr/local/lib/
COPY --from=binaries /usr/local/lib/libdaxctl.so.* /usr/local/lib/
# We need to overwrite the system libs, hence -f here.
RUN for i in /usr/local/lib/lib*.so.*; do ln -fs $i /usr/lib64; done
COPY --from=binaries /usr/local/bin/pmem-* /usr/local/bin/
COPY --from=binaries /usr/local/share/package-licenses /usr/local/share/package-licenses
COPY --from=binaries /usr/local/share/package-sources /usr/local/share/package-sources
COPY --from=binaries /usr/local/lib/NDCTL.COPYING /usr/local/share/package-licenses/
# The default lvm config uses lvmetad, so all lvm tools emit this warning:
# WARNING: Failed to connect to lvmetad. Falling back to device scanning.
# So, tell lvm not to use lvmetad.
RUN mkdir -p /etc/lvm
RUN echo "global { use_lvmetad = 0 }" >> /etc/lvm/lvm.conf && \
echo "activation { udev_sync = 0 udev_rules = 0 }" >> /etc/lvm/lvm.conf

# Don't rely on udevd, it isn't available (https://unix.stackexchange.com/questions/591724/how-to-add-a-block-to-udev-database-that-works-after-reboot).
# Same with D-Bus.
# Backup and archival of metadata inside the container is useless.
RUN sed -i \
-e 's/udev_sync = 1/udev_sync = 0/' \
-e 's/udev_rules = 1/udev_rules = 0/' \
-e 's/obtain_device_list_from_udev = 1/obtain_device_list_from_udev = 0/' \
-e 's/multipath_component_detection = 1/multipath_component_detection = 0/' \
-e 's/md_component_detection = 1/md_component_detection = 0/' \
-e 's/notify_dbus = 1/notify_dbus = 0/' \
-e 's/backup = 1/backup = 0/' \
-e 's/archive = 1/archive = 0/' \
/etc/lvm/lvm.conf
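
The effect of the sed pass above can be sketched on a few sample lines. The sample values below are assumed stock defaults; each `= 1` setting that matches one of the expressions gets flipped to `= 0`:

```shell
# Sketch: apply a subset of the sed expressions from the Dockerfile
# to sample lvm.conf lines instead of the real file.
sample='udev_sync = 1
notify_dbus = 1
backup = 1'

result=$(printf '%s\n' "$sample" | sed \
    -e 's/udev_sync = 1/udev_sync = 0/' \
    -e 's/notify_dbus = 1/notify_dbus = 0/' \
    -e 's/backup = 1/backup = 0/')

printf '%s\n' "$result"
```

Editing the distro's own `/etc/lvm/lvm.conf` in place (rather than appending a fresh `global { ... }` block as the removed lines did) keeps all other Debian defaults intact.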

ENV LD_LIBRARY_PATH=/usr/lib
# By default container runs with non-root user
61 changes: 10 additions & 51 deletions Jenkinsfile
@@ -53,6 +53,10 @@ pipeline {
// Set below via a script, must *not* be set here as it can't be overwritten.
// BUILD_TARGET = ""

// CACHEBUST is passed when building images to ensure that the base layer gets
// updated when building releases.
// CACHEBUST = ""

// This image is pulled at the beginning and used as cache.
// TODO: Here we use "canary" which is correct for the "devel" branch, but other
// branches may need something else to get better caching.
@@ -77,6 +81,8 @@

withDockerRegistry([ credentialsId: "${env.DOCKER_REGISTRY}", url: "https://${REGISTRY_NAME}" ]) {
script {
env.CACHEBUST = ""

// Despite its name, GIT_LOCAL_BRANCH contains the tag name when building a tag.
// At some point it also contained the branch name when building
// a branch, but not anymore, therefore we fall back to BRANCH_NAME
@@ -85,6 +91,7 @@
// then we have GIT_BRANCH.
if (env.GIT_LOCAL_BRANCH != null) {
env.BUILD_TARGET = env.GIT_LOCAL_BRANCH
env.CACHEBUST = env.GIT_LOCAL_BRANCH
} else if ( env.BRANCH_NAME != null ) {
env.BUILD_TARGET = env.BRANCH_NAME
} else {
@@ -96,19 +103,9 @@
// Pull previous image and use it as cache (https://andrewlock.net/caching-docker-layers-on-serverless-build-hosts-with-multi-stage-builds---target,-and---cache-from/).
sh ( script: "docker image pull ${env.BUILD_IMAGE} || true")
sh ( script: "docker image pull ${env.PMEM_CSI_IMAGE} || true")

// PR jobs need to use the same CACHEBUST value as the latest build for their
// target branch, otherwise they cannot reuse the cached layers. Another advantage
// is that they use a version of Clear Linux that is known to work, because "swupd update"
// will be cached.
env.CACHEBUST = sh ( script: "docker inspect -f '{{ .Config.Labels.cachebust }}' ${env.BUILD_IMAGE} 2>/dev/null || true", returnStdout: true).trim()
} else {
env.BUILD_IMAGE = "${env.REGISTRY_NAME}/pmem-clearlinux-builder:${env.BRANCH_NAME}-rejected"
}

if (env.CACHEBUST == null || env.CACHEBUST == "") {
env.CACHEBUST = env.BUILD_ID
}
}
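
The CACHEBUST selection above can be sketched in shell form. This is a simplified sketch: variable names mirror the Jenkins environment, the values are illustrative, and the PR-build path (reusing the `cachebust` label from the cached image) is omitted:

```shell
# Sketch: prefer the branch/tag name as the cache-bust value so that builds
# for the same branch share cached layers; fall back to the unique build
# number so that release builds always refresh the base layer.
GIT_LOCAL_BRANCH="release-0.8"   # set by Jenkins when building a tag/branch, may be empty
BUILD_ID="1234"                  # always set by Jenkins

CACHEBUST="${GIT_LOCAL_BRANCH:-}"
if [ -z "$CACHEBUST" ]; then
    CACHEBUST="$BUILD_ID"
fi
echo "$CACHEBUST"
```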
sh "env; echo Building BUILD_IMAGE=${env.BUILD_IMAGE} for BUILD_TARGET=${env.BUILD_TARGET}, CHANGE_ID=${env.CHANGE_ID}, CACHEBUST=${env.CACHEBUST}."
sh "docker build --cache-from ${env.BUILD_IMAGE} --label cachebust=${env.CACHEBUST} --target build --build-arg CACHEBUST=${env.CACHEBUST} -t ${env.BUILD_IMAGE} ."
@@ -118,29 +115,6 @@
}
}

stage('update base image') {
// Update the base image before doing a full build + test cycle. If that works,
// we push the new commits to GitHub.
when { environment name: 'JOB_BASE_NAME', value: 'pmem-csi-release' }

steps {
script {
status = sh ( script: "${RunInBuilder()} ${env.BUILD_CONTAINER} hack/create-new-release.sh", returnStatus: true )
if ( status == 2 ) {
// https://stackoverflow.com/questions/42667600/abort-current-build-from-pipeline-in-jenkins
currentBuild.result = 'ABORTED'
error('No new release, aborting...')
}
if ( status != 0 ) {
error("Creating a new release failed.")
}
}
// We must ensure that the workers use the same modified source code.
// This relies on create-new-release.sh producing just a single commit.
sh "git format-patch -n1 --stdout >_work/release.patch"
}
}

stage('docsite') {
steps {
sh "${RunInBuilder()} ${env.BUILD_CONTAINER} env GITHUB_SHA=${GIT_COMMIT} GITHUB_REPOSITORY=${SourceRepo()} make vhtml"
@@ -166,7 +140,7 @@
steps {
// This builds images for REGISTRY_NAME with the version automatically determined by
// the make rules.
sh "${RunInBuilder()} ${env.BUILD_CONTAINER} make build-images"
sh "${RunInBuilder()} ${env.BUILD_CONTAINER} make build-images CACHEBUST=${env.CACHEBUST}"

// For testing we have to have those same images also in a registry. Tag and push for
// localhost, which is the default test registry.
@@ -195,7 +169,6 @@
lz4 > _work/images.tar.lz4 && \
ls -l -h _work/images.tar.lz4"
stash includes: '_work/images.tar.lz4', name: 'images'
stash includes: '_work/release.patch', name: 'release', allowEmpty: true
}
}

@@ -307,7 +280,7 @@ git push origin HEAD:master
sh "imageversion=\$(${RunInBuilder()} ${env.BUILD_CONTAINER} make print-image-version) && \
expectedversion=\$(echo '${env.BUILD_TARGET}' | sed -e 's/devel/canary/') && \
if [ \"\$imageversion\" = \"\$expectedversion\" ] ; then \
${RunInBuilder()} ${env.BUILD_CONTAINER} make push-images PUSH_IMAGE_DEP=; \
${RunInBuilder()} ${env.BUILD_CONTAINER} make push-images CACHEBUST=${env.CACHEBUST} PUSH_IMAGE_DEP=; \
else \
echo \"Skipping the pushing of PMEM-CSI driver images with version \$imageversion because this build is for ${env.BUILD_TARGET}.\"; \
fi"
@@ -337,7 +310,7 @@ git push origin HEAD:master
String RunInBuilder() {
"\
docker exec \
-e BUILD_IMAGE_ID=${env.CACHEBUST} \
-e CACHEBUST=${env.CACHEBUST} \
-e 'BUILD_ARGS=--cache-from ${env.BUILD_IMAGE} --cache-from ${env.PMEM_CSI_IMAGE}' \
-e DOCKER_CONFIG=${WORKSPACE}/_work/docker-config \
-e REGISTRY_NAME=${env.REGISTRY_NAME} \
@@ -425,15 +398,6 @@ void PrepareEnv() {
timeout=\$((timeout + 10)); \
done"

// Install additional tools:
// - ssh client for govm
// - python3 for Sphinx (i.e. make html)
// - parted, xfsprogs, os-cloudguest-aws (contains mkfs.ext4) for ImageFile test
sh "docker exec ${env.BUILD_CONTAINER} swupd bundle-add openssh-client python3-basic parted xfsprogs os-cloudguest-aws"

// Now commit those changes to ensure that the result of "swupd bundle add" gets cached.
sh "docker commit ${env.BUILD_CONTAINER} ${env.BUILD_IMAGE}"

// Make /usr/local/bin writable for all users. Used to install kubectl.
sh "docker exec ${env.BUILD_CONTAINER} sh -c 'mkdir -p /usr/local/bin && chmod a+wx /usr/local/bin'"

@@ -459,11 +423,6 @@ void RestoreEnv() {
unstash 'images'
sh 'lz4cat _work/images.tar.lz4 | docker load'

// In case of a release update, also apply the same source code patch.
// Does not exist during normal PR testing.
unstash 'release'
sh 'if [ -f _work/release.patch ]; then git am _work/release.patch; fi'

// Set up build container and registry.
PrepareEnv()

12 changes: 3 additions & 9 deletions Makefile
@@ -79,22 +79,16 @@ $(CMDS): check-go-version-$(GO_BINARY)
$(TEST_CMDS): %-test: check-go-version-$(GO_BINARY)
$(GO) test --cover -covermode=atomic -c -coverpkg=./pkg/... -ldflags '-X github.com/intel/pmem-csi/pkg/$*.version=${VERSION}' -o ${OUTPUT_DIR}/$@ ./cmd/$*

# The default is to refresh the base image once a day when building repeatedly.
# This is achieved by passing a fake variable that changes its value once per day.
# A CI system that produces production images should instead use
# `make BUILD_IMAGE_ID=<some unique number>`.
#
# At the moment this build ID is not recorded in the resulting images.
# The VERSION variable should be used for that, if desired.
BUILD_IMAGE_ID?=$(shell date +%Y-%m-%d)
# Set by the CI to ensure that image building really pulls a new base.
CACHEBUST=

# Build and publish images for production or testing (i.e. with test binaries).
# Pushing images also automatically rebuilds the image first. This can be disabled
# with `make push-images PUSH_IMAGE_DEP=`.
build-images: build-image build-test-image
push-images: push-image push-test-image
build-image build-test-image: build%-image: populate-vendor-dir
docker build --pull --build-arg CACHEBUST=$(BUILD_IMAGE_ID) --build-arg GOFLAGS=-mod=vendor --build-arg BIN_SUFFIX=$(findstring -test,$*) $(BUILD_ARGS) -t $(IMAGE_TAG) -f ./Dockerfile . --label revision=$(VERSION)
docker build --pull --build-arg CACHEBUST=$(CACHEBUST) --build-arg GOFLAGS=-mod=vendor --build-arg BIN_SUFFIX=$(findstring -test,$*) $(BUILD_ARGS) -t $(IMAGE_TAG) -f ./Dockerfile . --label revision=$(VERSION)
PUSH_IMAGE_DEP = build%-image
# "docker push" has been seen to fail temporarily with "error creating overlay mount to /var/lib/docker/overlay2/xxx/merged: device or resource busy".
# Here we simply try three times before giving up.
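
The "try three times before giving up" logic described above can be sketched as a retry loop. `flaky_push` here is a stub standing in for `docker push`; in this sketch it fails twice and then succeeds, to exercise the retry path:

```shell
# Sketch: retry a flaky command up to three times.
attempts=0
flaky_push() {
    attempts=$((attempts + 1))
    # Stub: fail on the first two attempts, succeed on the third.
    [ "$attempts" -ge 3 ]
}

ok=0
for i in 1 2 3; do
    if flaky_push; then
        ok=1
        break
    fi
    echo "push failed, retrying ($i/3)" >&2
done
[ "$ok" = 1 ] || exit 1
echo "pushed after $attempts attempts"
```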
19 changes: 7 additions & 12 deletions docs/DEVELOPMENT.md
@@ -82,9 +82,9 @@ Nonetheless, input needs to be validated to catch mistakes:
### Branching

The `master` branch is the main branch. It is guaranteed to have
passed full CI testing. However, it always uses the latest Clear Linux
for building container images, so changes in Clear Linux can break the
building of older revisions.
passed full CI testing. However, the Dockerfile uses whatever is
the latest upstream content of the base distribution, and therefore
test results are not perfectly reproducible.

The `devel` branch contains additional commits on top of `master`
which might not have been tested in that combination yet. Therefore it
@@ -107,21 +107,17 @@ that. This will block updating `master` and thus needs to be dealt
with quickly.

Releases are created by branching `release-x.y` from `master` or some
older, stable revision. On that new branch, the base image is locked
onto a certain Clear Linux version with the
`hack/update-clear-linux-base.sh` script. Those `release-x.y` branches
are then fully reproducible. The actual `vx.y.z` release tags are set
older, stable revision. The actual `vx.y.z` release tags are set
on revisions in the corresponding `release-x.y` branch.

Releases and the corresponding images are never changed. If something
goes wrong after setting a tag (like detecting a bug while testing the
release images), a new release is created.

Container images reference a fixed base image. To ensure that the base
image remains secure, `hack/update-clear-linux-base.sh` gets run
periodically to update a `release-x.y` branch and a new release with
`z` increased by one is created. Other bug fixes might be added to
that release by merging into the branch.
image remains secure, it gets scanned for known vulnerabilities regularly
and a new release is prepared manually if needed. The new release then
uses a newer base image.

### Tagging

@@ -140,7 +136,6 @@
### Release checklist

* Create a new `release-x.y` branch.
* Run `hack/update-clear-linux-base.sh`.
* Run `hack/set-version.sh vx.y.z` and commit the modified files.
* Push to `origin`.
* [Create a draft
51 changes: 0 additions & 51 deletions hack/create-new-release.sh

This file was deleted.
