error building image: error building stage GCC #763
Are you using caching? Might be related to #742
@denitol Can you provide more information on your Dockerfile?
Same problem.

My Docker image:
We are also on gitlab/gitlab-runner + k8s. I can reproduce this with this command inside the image "gcr.io/kaniko-project/executor:debug" (sha256:025bd79d3e0699b5f59142b03f7e66916980bd0e32653b9c7e21b561d4e538c3). When the cache is cleared, the build works fine.
We're running into the same bug, which is reproducible on every run as long as the cache exists. Can I help, e.g. do you need another Dockerfile example or debug output?
You probably need to clear your cache so that it doesn't keep re-using the bad cached layer.
@cvgw Well, that's not a solution, as on the next run a newly created cache would be saved which is also faulty.
Apologies if I'm misunderstanding you; my suggestion was to clear the cache when going from a known bad version of kaniko to a good version, e.g. an image that was previously built with kaniko v0.17.0 and cached.
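One way to do a one-off cache purge is to delete every tag in the registry-side cache repository. A minimal sketch, assuming you use `crane` (from go-containerregistry) and that the repository below is whatever you pass to kaniko's `--cache-repo` flag; the repo name in the usage example is purely hypothetical:

```shell
#!/bin/sh
# Sketch: delete every tag in the kaniko cache repository so the next
# build starts from a clean cache. Assumes `crane` is installed and
# authenticated against the registry.
purge_kaniko_cache() {
  repo="$1"
  for tag in $(crane ls "$repo"); do
    crane delete "$repo:$tag"
    echo "deleted $repo:$tag"
  done
}

# Hypothetical usage (point it at your --cache-repo value):
# purge_kaniko_cache registry.example.com/myproject/kaniko-cache
```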
Tried this; it's not working either.
> Step improbable-eng#1: error building image: error building stage: failed to execute command: extracting fs from image: removing whiteout .wh.workspace: unlinkat //workspace: device or resource busy See GoogleContainerTools/kaniko#763
With kaniko 0.24.0 this problem still persists. The first run without cache is successful, but the second run, with the cached layer, fails:
I was thinking: would it be possible to write (perhaps outside kaniko) a script that retries the job without cache if the first one fails? Probably using the
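A minimal sketch of such a retry wrapper, assuming the build is a single command that accepts kaniko's `--cache` flag (`--cache=true/false` are real executor flags; the wrapper function name and the usage example below are hypothetical):

```shell
#!/bin/sh
# Sketch: run the build with the cache enabled; if it fails (e.g. because
# of a corrupt cached layer), retry once with the cache disabled.
# "$@" is whatever invokes the kaniko executor in your CI job.
build_with_cache_fallback() {
  if "$@" --cache=true; then
    return 0
  fi
  echo "cached build failed; retrying with --cache=false" >&2
  "$@" --cache=false
}

# Hypothetical usage in a GitLab CI script step:
# build_with_cache_fallback /kaniko/executor \
#   --context "$CI_PROJECT_DIR" \
#   --dockerfile "$CI_PROJECT_DIR/Dockerfile" \
#   --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

Note this doubles the worst-case build time on failure, but keeps the cache warm for the runs where it works.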
Can confirm that this is still an issue with Kaniko 1.2.0 (and Gitlab Runner 12.8.0 + kubernetes executor) |
Is there a workaround? |
Same |
And with v1.3 |
I've experienced the same issue while using warmer. It occurs when the executor is creating a .tar.gz file when running under Docker.
Edit: I'm using the latest debug image.
I'm still experiencing this issue with Kaniko. The first run works fine, but when using the cached layer it throws an error:
Running with docker-executor on gitlab-runner
Tested again with version
I've changed the Docker storage driver from the default. As a note, if you use a gitlab-runner with the Docker executor: this fixed the issues I was facing.
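For reference, the Docker daemon's storage driver is set in /etc/docker/daemon.json on the runner host, and the active driver can be checked with `docker info --format '{{.Driver}}'`. A sketch; the commenter above does not say which driver they switched to, so "overlay2" below is only an illustrative value:

```shell
#!/bin/sh
# Sketch: write a daemon.json that pins the Docker storage driver.
# The driver name is purely illustrative; the comment above does not
# state which one was used. Restart the Docker daemon after changing it.
write_storage_driver_config() {
  driver="$1"   # e.g. "overlay2" (illustrative only)
  path="$2"     # normally /etc/docker/daemon.json
  printf '{\n  "storage-driver": "%s"\n}\n' "$driver" > "$path"
}

# Check what is currently active (real docker CLI invocation):
# docker info --format '{{.Driver}}'
```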
I encountered a similar issue when building a Docker image using Kaniko on k8s. Unfortunately, the Dockerfile refers to enterprise Docker images, so I cannot post steps to reproduce it here.
It fails after
Seems to me that unpacking the rootfs failed. Note: the Dockerfile can be built successfully using Docker.
I experienced exactly the same issue. Did you resolve it? |
In my case, taking out the registry mirror setting fixed the issue... I don't know how this would be related.
Have you solved this problem yet? I'm having the same trouble |
getting same issue |
Same issue. Caching issue? |
same issue with caching, v1.9.1-debug |
I found a workaround for my issue
Any updates? I'm facing the same issue when using the cache: the first build works fine but the second fails.
I've had this problem in gitlab-runner for a long time, but just now tested it again and it works for me with the following config / versions:
On this configuration I have a problem. The error appears after I change something in the Dockerfile. If nothing changes in the Dockerfile, the build works and the cache is used.
I tried to build a GCC image with kaniko and got an error.
kaniko image: gcr.io/kaniko-project/executor:debug
Stack: k8s, gitlab, gitlab-runner (inside k8s)
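For context, a minimal GitLab CI job of the kind described in this thread. This is a sketch under assumptions: the reporter's actual pipeline config is not shown, and apart from the kaniko executor image named above, the job name and registry paths are placeholders:

```yaml
# Hypothetical .gitlab-ci.yml job matching the setup in this thread:
# kaniko debug image with layer caching enabled.
build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/Dockerfile"
      --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
      --cache=true
```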