
Multistage build is causing deletion of files from kaniko run environment #450

Closed
mtsyganov opened this issue Nov 14, 2018 · 18 comments
Assignees
Labels
area/multi-stage builds issues related to kaniko multi-stage builds help wanted Looking for a volunteer! needs-reproduction norepro core team can't replicate this issue, require more info work-around-available

Comments

@mtsyganov

Actual behavior
Before the executor is called in kaniko's runtime, the /root/.docker/config.json file is created to give kaniko access to private registries.
When a multi-stage build is executed, kaniko deletes the filesystem before the image for the next stage is downloaded. This deletion also removes /root/.docker/config.json. As a consequence, the image for the third stage can no longer be pulled from the private registry. In two-stage builds, after both stages are processed, the resulting image can't be pushed to the private registry because the credentials are missing.

This problem affects #407 and was not completely fixed by MR #192

Expected behavior
Deleting the filesystem between stages should not remove files from kaniko's own runtime environment at all.

To Reproduce
Steps to reproduce the behavior:

  1. Create a Dockerfile with a three-stage build using images from a private registry
  2. Run kaniko in a container and, before the executor is called, create the /root/.docker/config.json file
  3. Call the executor and wait until the third stage is executed. The third image can't be pulled from the private registry

Additional Information
Dockerfile:

FROM $CI_REGISTRY_IMAGE:test2 as pg
FROM $CI_REGISTRY_IMAGE:test1 as go
FROM $CI_REGISTRY_IMAGE:latest

Build with gitlab ci pipeline:

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  only:
    - tags
@bobcatfish bobcatfish added the help wanted Looking for a volunteer! label Nov 16, 2018
@miguelitoq76

miguelitoq76 commented Dec 4, 2018

@mtsyganov

please try this: the config files should be located in the /kaniko/.docker folder to protect them from deletion.

Explanation: during a multi-stage build, kaniko deletes the filesystem between stages, and the /root/.docker folder is not protected. Only configuration under /kaniko is preserved.

Whether or not this solves your issue, please post your result...

script:
- export DOCKER_CONFIG=/kaniko/.docker/
- export GOOGLE_APPLICATION_CREDENTIALS=/kaniko/.docker/config.json  
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
- /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG

@mtsyganov
Author

@miguelitoq76 yes, this is a working solution. thank you

@priyawadhwa priyawadhwa added the area/multi-stage builds issues related to kaniko multi-stage builds label Jul 25, 2019
@tejal29 tejal29 added priority/p3 agreed that this would be good to have, but no one is available at the moment. work-around-available kind/bug Something isn't working labels Oct 2, 2019
@cvgw
Contributor

cvgw commented Nov 16, 2019

I'm not clear whether there is a bug here. Config for private registries should be located at /kaniko/.docker/config.json, as outlined in the docs.

When I created a random file (say at /root/.docker/config.json) in the Kaniko container before running the executor, the file could still be found at the end of the build.

If there is an issue here with losing files please provide some repro steps. Thanks!

@cvgw cvgw added needs-reproduction norepro core team can't replicate this issue, require more info and removed kind/bug Something isn't working priority/p3 agreed that this would be good to have, but no one is available at the moment. labels Nov 16, 2019
@cvgw cvgw self-assigned this Nov 16, 2019
@cvgw
Contributor

cvgw commented Jan 18, 2020

Closing this as there are no updates. Feel free to reopen if there is still an issue

@cvgw cvgw closed this as completed Jan 18, 2020
@Type1J

Type1J commented Feb 24, 2021

I put the Kaniko executor in a container and had it perform a multi-stage build, and it wiped out /bin and /usr. I had the same problem with config.json, but I can put that in /kaniko/.docker/. I need to move /bin and /usr into /kaniko as well, and reset the PATH, to make that work.

I know one could work around it, as I described above, but why does Kaniko have this system-destroying behavior?! I thought about running it before I put it in a container, but I saw that it needed root to work, and I just happened to not be lazy and run sudo executor ... today. I'm glad I had coffee! I saw in the source something about a whitelist preventing certain directories from being destroyed. Can I add to this list with command-line options or config files, or do I need to recompile?
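For anyone with the same question: newer kaniko releases document a repeatable --ignore-path flag that adds entries to this ignore list at run time, so recompiling shouldn't be necessary. A sketch, assuming a recent executor image (the paths here are just the ones mentioned in this thread):

```
/kaniko/executor \
  --context "$CI_PROJECT_DIR" \
  --dockerfile "$CI_PROJECT_DIR/Dockerfile" \
  --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" \
  --ignore-path /root/.docker \
  --ignore-path /bin \
  --ignore-path /usr
```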

@pascalgulikers

pascalgulikers commented Mar 4, 2021

I'm experiencing the same issue with the Kaniko executor v1.5.1: /busybox is being deleted in a multi-stage build

@qalinn

qalinn commented Jun 7, 2021

The same for 1.6.0

@Type1J

Type1J commented Jun 7, 2021

The way that Kaniko seems to work is to use its local filesystem, not a chroot (or similar) filesystem, and that's why it destroys the system. Apps written in C or C++ will fail to find the dynamic linker (located at /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2), so you'll have to put it in one of the directories that are not destroyed and then edit the ELF files to point to it correctly.

Go binaries (like terraform and its plugins, which are separate executables, not dynamic libraries) are not dynamic ELF files by default (they are statically linked by default), so they still work. C, C++, and Rust binaries can be configured to be static binaries as well: with C you'll probably want to use musl for your build tools, and for Rust-based build tools it's just a line you add to a config file before recompiling.

This sort of solution (Kaniko destroying everything instead of restoring the original state on the second image) is less than optimal.
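As a sketch of the static-linking approach described above (the base image, package names, and file names are illustrative):

```
# Alpine ships musl, so gcc -static produces a fully static binary that
# needs no dynamic linker at run time
FROM alpine:3.16 AS build
RUN apk add --no-cache gcc musl-dev
COPY hello.c /src/hello.c
RUN gcc -static -o /hello /src/hello.c

# For Rust, the "line in a config file" amounts to targeting musl:
#   rustup target add x86_64-unknown-linux-musl
#   cargo build --release --target x86_64-unknown-linux-musl
```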

@henrysachs

henrysachs commented Apr 21, 2022

Hey there, I'm still facing this issue. My docker config is deleted in the multi-stage build, so the image can't be pushed at the end. Is there a way to add directories to the whitelist?

EDIT: just had a quick look through the code, and it seems that it's currently only available when using kaniko as a library

@henrysachs

my Dockerfile kinda looks like this:

Step 1: create a docker config with auth in /root/.docker/config.json

Step 2: run kaniko with a multi-stage Dockerfile (we use this one):

FROM  golang:1.18-alpine AS build
WORKDIR /go/src/
# renovate: datasource=github-tags depName=mozilla/sops
ENV SOPS_VERSION="v3.7.1"
RUN apk add --update --no-cache ca-certificates curl git gcc musl-dev
COPY . /src
# not suitable for local builds with these flags
RUN  GO111MODULE=on go build -ldflags "-linkmode external -extldflags -static" -a main.go
RUN curl https://github.com/mozilla/sops/releases/download/${SOPS_VERSION}/sops-${SOPS_VERSION}.linux -o /usr/local/bin/sops -LJ \
    && chmod 0755 /usr/local/bin/sops \
    && chown root:root /usr/local/bin/sops 

FROM crane:debug@sha256:66336f608f9a219c93fdb610fbcb491b5d38e72c250a191f2c5efda6427028c1 AS crane

FROM kaniko-project/executor:v1.8.1-debug
# SSL_CERT_FILE Is new in crane docker image, so fix it
ENV SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
ENV SSL_CERT_DIR=/etc/ssl/certs
# Because kaniko wants it this way
ENV DOCKER_CONFIG=/root/.docker/
ENV KO_DATA_PATH=/var/run/ko
COPY --from=build /go/src/main /go2ecr/go2ecr
RUN ["/busybox/mkdir", "-p", "/root/.docker"]
RUN ["/busybox/touch", "/root/.docker/config.json"]
RUN echo -e "{}" > /root/.docker/config.json
ENTRYPOINT ["/busybox/sh", "-c"]

When building this Dockerfile with kaniko, the /root/.docker/config.json is deleted at the end, and the image can't be pushed to its destinations.
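Following the workaround earlier in this thread, a sketch of how the final stage could be adapted (paths as discussed above; untested against this exact image):

```
FROM kaniko-project/executor:v1.8.1-debug
# keep the docker config under /kaniko, which the executor does not wipe,
# and point DOCKER_CONFIG there instead of /root/.docker
ENV DOCKER_CONFIG=/kaniko/.docker
RUN ["/busybox/mkdir", "-p", "/kaniko/.docker"]
RUN echo "{}" > /kaniko/.docker/config.json
ENTRYPOINT ["/busybox/sh", "-c"]
```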

@Type1J

Type1J commented Apr 22, 2022

That has been our experience. With Docker Engine being forced out of Kubernetes, we've had to move our Jenkins server to a VM. It feels like a huge step backwards. I thought that Kaniko would fix that problem, but it looks like it doesn't have enough control to make multistage builds work. I'd rather run QEMU in a container and get Docker Engine building the container.

@qalinn

qalinn commented Apr 25, 2022

Hello,

I have the same issue, and I add/install what I need under the /kaniko folder, which is skipped during deletion.

@Type1J

Type1J commented Apr 25, 2022

@qalinn Yes, and that requires having a very specific structure in your images, and it must be used in every stage. If this could be hidden from the perspective of the Dockerfile, then that might work, but if it's not transparent (meaning that the same Dockerfile could be used with both Docker Engine and Kaniko), then it would force all parties (devs, QA, and production environments) using a Dockerfile to use Kaniko exclusively, which can't be forced on them.

@Askill

Askill commented Jul 18, 2022

Problem still exists.

@devopsinthecloud

Hi, I'm also experiencing an issue. I'm building a Dockerfile with kaniko in GCP Cloud Build. Among other things, I need to copy a config to another folder and replace one line there. For some reason, after running these two commands, kaniko deletes the file:
RUN cp /usr/local/etc/php-fpm.d/www.conf /tmp
RUN sed -i "s|listen = 127.0.0.1:9000|listen = 127.0.0.1:9001|g" /tmp/www.conf

So the file should be found in /tmp, but it never persists. I also tried other directories and still have the same issue.
It works perfectly on my local machine but not in the cloud, so I'm sure it's a problem with kaniko.

@Type1J

Type1J commented Aug 17, 2022

@devopsinthecloud If you put things in the /kaniko directory, then when Kaniko deletes things, it will leave those alone.

@dmthomson

@devopsinthecloud If you put things in the /kaniko directory, then when Kaniko deletes things, it will leave those alone.

Nice in theory; however, in my case my AWS creds are written to /root/.aws/credentials, and during the multi-stage build it appears this gets removed. If I put them in /kaniko/.aws/credentials it doesn't work.

Also, I noticed that exporting my credentials doesn't work either. They show up in the environment, but I get an authentication error from ECR:

error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "repoaddress:tagname": Post "repoaddress:tagname/blobs/uploads/": EOF
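One thing that may be worth trying (a sketch; whether the bundled ECR credential helper is present depends on the kaniko image and version):

```
# pass ECR credentials via the standard AWS environment variables instead of
# /root/.aws/credentials, which the multi-stage build appears to delete
export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
export AWS_DEFAULT_REGION="<region>"
# tell kaniko to resolve registry auth through the ECR credential helper
echo '{ "credsStore": "ecr-login" }' > /kaniko/.docker/config.json
```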

@Type1J

Type1J commented Feb 6, 2023

A docker login ... command writes a file, which kaniko deletes. You basically have to rewrite your Dockerfile to keep stashing things in /kaniko and then copying them back to where they should be. It's too manual. Have you tried podman?
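As a concrete sketch of writing the auth file directly instead of running docker login (the variable names and the /tmp fallback path are illustrative; inside the kaniko image DOCKER_CONFIG would point at /kaniko/.docker):

```shell
# defaults are placeholders for illustration only
REGISTRY="${REGISTRY:-registry.example.com}"
REGISTRY_USER="${REGISTRY_USER:-ci-user}"
REGISTRY_PASSWORD="${REGISTRY_PASSWORD:-ci-password}"
# inside the kaniko image this would be /kaniko/.docker
DOCKER_CONFIG="${DOCKER_CONFIG:-/tmp/kaniko-docker}"
export DOCKER_CONFIG
mkdir -p "$DOCKER_CONFIG"
# docker-style auth entry: base64("user:password")
auth=$(printf '%s:%s' "$REGISTRY_USER" "$REGISTRY_PASSWORD" | base64 | tr -d '\n')
printf '{"auths":{"%s":{"auth":"%s"}}}\n' "$REGISTRY" "$auth" > "$DOCKER_CONFIG/config.json"
```

Since the file lives under /kaniko (or wherever DOCKER_CONFIG points), it survives the filesystem wipe between stages.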
