Cannot chain builds using minikube default registry #886

Open
nicolaferraro opened this issue Nov 26, 2019 · 0 comments
Labels
area/registry: For all bugs having to do with pushing/pulling into registries
kind/bug: Something isn't working
priority/p3: agreed that this would be good to have, but no one is available at the moment

Comments

@nicolaferraro

Actual behavior

We use Kaniko to build images in Camel K, and sometimes we chain images: we build img1 from our base image (fabric8/s2i-java:3.0-java8), and then build img2 using img1 as its base. This setup works with most registries, but not with the Minikube default registry, where the build of the second image fails.
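For context, a minimal sketch of the two chained builds described above. The image references are taken from this report; everything beyond the FROM lines is an assumption:

```dockerfile
# Build 1: img1 is built from the public base image and pushed to the
# in-cluster registry as 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080
FROM fabric8/s2i-java:3.0-java8
# ... application layers (assumed) ...

# Build 2 (a separate kaniko run): img2 uses img1 as its base.
# Pulling this base from the Minikube registry is the step that fails.
FROM 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080
# ... additional layers (assumed) ...
```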

This is what happens with cache enabled (when building img2):

DEBU[0000] Copying file /workspace/builder-620876658/package/context/Dockerfile to /kaniko/Dockerfile
DEBU[0030] Skip resolving path /kaniko/Dockerfile
DEBU[0030] Skip resolving path /workspace/builder-620876658/package/context
DEBU[0030] Skip resolving path /workspace/cache
DEBU[0030] Skip resolving path
DEBU[0030] Skip resolving path
DEBU[0030] Skip resolving path
INFO[0030] Resolved base name 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080 to 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080
INFO[0030] Resolved base name 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080 to 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080
INFO[0030] Downloading base image 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080

... stuck here for some minutes ...

INFO[0184] Error while retrieving image from cache: getting file info: stat /workspace/cache/sha256:b10a49f894bda9fcc289404442b006e7a21a84b849a46d3de6eebb7d7f35d8d6: no such file or directory
INFO[0184] Downloading base image 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080
INFO[0338] Built cross stage deps: map[]
INFO[0338] Downloading base image 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080
INFO[0492] Error while retrieving image from cache: getting file info: stat /workspace/cache/sha256:b10a49f894bda9fcc289404442b006e7a21a84b849a46d3de6eebb7d7f35d8d6: no such file or directory
INFO[0492] Downloading base image 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080

Before the error appears, the logs are stuck for several minutes at the downloading stage.

The container configuration looks like this:

  - args:
    - --dockerfile=Dockerfile
    - --context=/workspace/builder-882600208/package/context
    - --destination=10.98.49.20/default/camel-k-kit-bnekkbm6in4lp73omlh0:1231828
    - --cache=false
    - --cache-dir=/workspace/cache
    - --insecure
    - --insecure-pull
    - --verbosity
    - debug
    image: gcr.io/kaniko-project/executor:v0.14.0

The same happens with --cache=true when we warm up the cache in advance. The Dockerfile starts from 10.98.49.20/default/camel-k-kit-bnekabe6in4lp73omlg0:1230080.

Expected behavior
This error happens on Minikube only. It seems kaniko is not able to pull images from that registry.

There are no problems with, for example, Docker Hub.

To Reproduce
Steps to reproduce the behavior:

  1. Easy to reproduce with Camel K. Let me know if you want a reproducer. At the moment I'm looking for suggestions on what the problem could be.
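A minimal reproduction sketch, assuming the Minikube registry addon is reachable at its cluster IP. The executor image, args, and registry address below are taken from this report; the pod name and labels are assumptions:

```yaml
# Hypothetical standalone pod mirroring the reported kaniko configuration:
# it builds img2, whose Dockerfile starts FROM an image previously pushed
# to the in-cluster registry at 10.98.49.20.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-chained-build   # assumed name
  namespace: default
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:v0.14.0
    args:
    - --dockerfile=Dockerfile
    - --context=/workspace/builder-882600208/package/context
    - --destination=10.98.49.20/default/camel-k-kit-bnekkbm6in4lp73omlh0:1231828
    - --cache=false
    - --cache-dir=/workspace/cache
    - --insecure
    - --insecure-pull
    - --verbosity
    - debug
```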

Additional Information

  • Kaniko Image (fully qualified with digest): v0.14.0 (4afd732d4dfa)

Triage Notes for the Maintainers

| Description | Yes/No |
| --- | --- |
| Please check if this is a new feature you are proposing | |
| Please check if the build works in docker but not in kaniko | |
| Please check if this error is seen when you use --cache flag | |
| Please check if your dockerfile is a multistage dockerfile | |
@cvgw added the area/registry, kind/bug, and priority/p3 labels on Dec 2, 2019