
Multi stage build ends with "no space left on device", no matter the free space you provide #2033

Closed
bmalynovytch opened this issue Apr 5, 2022 · 7 comments · Fixed by #2863
Labels: area/behavior, area/multi-stage builds, categorized, differs-from-docker, interesting, issue/hang, issue/oom, ok-to-close?, possible-dupe, priority/p0, works-with-docker

Comments

@bmalynovytch

Actual behavior
When building a multi-stage container image, Kaniko drains all available local storage (the source images weigh between 1 GiB and 2 GiB, yet Kaniko consumes up to 500 GiB) and crashes with "no space left on device".

Logs

INFO[0007] Resolved base name docker.io/bitnami/redis:6.2 to redis 
INFO[0007] Resolved base name docker.io/bitnami/memcached:1.6.13 to memcached 
INFO[0007] Resolved base name docker.io/zulip/zulip-postgresql:10 to postgresql 
INFO[0007] Resolved base name docker.io/zulip/docker-zulip:4.10-0 to zulip 
INFO[0007] Retrieving image manifest docker.io/bitnami/redis:6.2 
INFO[0007] Retrieving image docker.io/bitnami/redis:6.2 from registry index.docker.io 
INFO[0008] Retrieving image manifest docker.io/bitnami/memcached:1.6.13 
INFO[0008] Retrieving image docker.io/bitnami/memcached:1.6.13 from registry index.docker.io 
INFO[0009] Retrieving image manifest docker.io/zulip/zulip-postgresql:10 
INFO[0009] Retrieving image docker.io/zulip/zulip-postgresql:10 from registry index.docker.io 
INFO[0010] Retrieving image manifest docker.io/zulip/docker-zulip:4.10-0 
INFO[0010] Retrieving image docker.io/zulip/docker-zulip:4.10-0 from registry index.docker.io 
INFO[0011] Retrieving image manifest docker.io/bitnami/rabbitmq:3.9.13 
INFO[0011] Retrieving image docker.io/bitnami/rabbitmq:3.9.13 from registry index.docker.io 
INFO[0012] Built cross stage deps: map[0:[/] 1:[/] 2:[/] 3:[/]] 
INFO[0012] Retrieving image manifest docker.io/bitnami/redis:6.2 
INFO[0012] Returning cached image manifest              
INFO[0012] Executing 0 build triggers                   
INFO[0016] Saving file . for later use                  
error building image: could not save file: write /kaniko/0/dev/full: no space left on device

Dockerfile (extract)

FROM docker.io/bitnami/redis:6.2         AS redis
FROM docker.io/bitnami/memcached:1.6.13  AS memcached
FROM docker.io/zulip/zulip-postgresql:10 AS postgresql
FROM docker.io/zulip/docker-zulip:4.10-0 AS zulip

FROM docker.io/bitnami/rabbitmq:3.9.13

COPY --from=redis      / /jails/redis
COPY --from=memcached  / /jails/memcached
COPY --from=postgresql / /jails/postgresql
COPY --from=zulip      / /jails/zulip
[...]

Command

/kaniko/executor --context . --dockerfile ./Dockerfile --destination [...]

Kaniko versions affected
Tried with executor:v1.6.0-debug and executor:v1.8.0-debug

Expected behavior
Build should work and end with a functional image (like local Docker does)

To Reproduce
Steps to reproduce the behavior:

  1. Use the Dockerfile snippet provided
  2. Build using Kaniko

Triage Notes for the Maintainers

Description Yes/No
Please check if this a new feature you are proposing
Please check if the build works in docker but not in kaniko
Please check if this error is seen when you use --cache flag
Please check if your dockerfile is a multistage dockerfile
@shawnweeks

shawnweeks commented Apr 5, 2022

It looks like this only affects builds that copy from /. There is probably a symlink loop or something similar involved. Here is my minimal reproduction:

FROM redhat/ubi8:latest as build

RUN dnf install nc -y

FROM scratch
COPY --from=build / /

@pmhahn

pmhahn commented May 17, 2022

See #960 (comment): Kaniko uses otiai10/copy#78 as its implementation for copying files, which does not treat device special files such as /dev/console or /dev/zero specially, but copies their contents; for /dev/zero that is an endless stream of zeros, which quickly fills any amount of space.

@erNail
Copy link

erNail commented Jul 4, 2022

We have the same issue when trying to build this Dockerfile:

https://github.com/paritytech/polkadot/blob/d22eb62fe40e55e15eb91d375f48cc540d83a47e/scripts/ci/dockerfiles/polkadot/polkadot_builder.Dockerfile#L1-L36

Any news on a workaround or a fix for this problem?

@sumkincpp
Contributor

This also seems relevant to hard links being copied; see #1743

@aaron-prindle aaron-prindle added area/multi-stage builds issues related to kaniko multi-stage builds issue/hang issue/oom priority/p1 Basic need feature compatibility with docker build. we should be working on this next. priority/p0 Highest priority. Break user flow. We are actively looking at delivering it. differs-from-docker possible-dupe ok-to-close? works-with-docker area/behavior all bugs related to kaniko behavior like running in as root categorized labels Jun 21, 2023
@JeromeJu
Collaborator

JeromeJu commented Oct 17, 2023

Looking into this, it seems that with the latest image >v1.15.0, the output is:

INFO[0004] Saving file . for later use                  
error building image: could not save file: copying file: symlink /usr/share/ca-certificates/mozilla/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.crt /kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko
...
...
...
kaniko/0/etc/ssl/certs/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem: file name too long

which comes from otiai10Cpy.Copy and util.CopyFileOrSymlink(p, dstDir, config.RootDir), consistent with the comment above.
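The recursively growing /kaniko/0/kaniko/0/... path in that error is what happens when the copy destination lives inside the copy source: copying / into /kaniko/0 keeps re-copying the partially written destination into itself until "file name too long" (or "no space left on device") is hit. A minimal, hypothetical check for that condition (illustration only, not kaniko code):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// destInsideSrc reports whether dest is the same as, or nested
// beneath, src. If true, a naive recursive copy of src into dest
// will keep re-copying its own output (e.g. copying / into /kaniko/0
// produces /kaniko/0/kaniko/0/... without bound).
// Hypothetical helper for illustration only.
func destInsideSrc(src, dest string) bool {
	rel, err := filepath.Rel(filepath.Clean(src), filepath.Clean(dest))
	if err != nil {
		return false
	}
	return rel == "." || (rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator)))
}

func main() {
	fmt.Println(destInsideSrc("/", "/kaniko/0"))        // true: dest nested in src
	fmt.Println(destInsideSrc("/srv/app", "/kaniko/0")) // false: disjoint trees
}
```

A copier could use such a check either to refuse the operation or, as the eventual fix does, to exclude the destination subtree from the walk.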

@JeromeJu JeromeJu removed the priority/p1 Basic need feature compatibility with docker build. we should be working on this next. label Oct 19, 2023
@JeromeJu
Collaborator

JeromeJu commented Oct 31, 2023

Verified with v1.9.0, the version current when this bug was filed: the error message was write /kaniko/0/dev/full: no space left on device, instead of the recursive name that leads to "file name too long".

Per docker stats, the memory usage is roughly 3× what v1.18.0 uses.

@shapirus

shapirus commented Nov 1, 2023

Why doesn't kaniko just use cp -a? I mean, if the library it uses to copy files is broken, why not use a standard tool that has handled this correctly for literally decades?

JeromeJu added a commit to JeromeJu/kaniko that referenced this issue Nov 21, 2023
This commit adds the skip option for otiai10.Copy to skip the /kaniko
directory when the root is being copied. The files under /kaniko dir
should be ignored and thus this shall not cause any loss of information.

fixes: GoogleContainerTools#2033
aaron-prindle pushed a commit that referenced this issue Nov 29, 2023
This commit adds the skip option for otiai10.Copy to skip the /kaniko
directory when the root is being copied. The files under /kaniko dir
should be ignored and thus this shall not cause any loss of information.

fixes: #2033