
Docker: unable to push to some registries when the FROM clause contains a private registry different from the target one, or a proxy URL #808

Closed
antechrestos opened this issue Oct 7, 2019 · 10 comments · Fixed by #957
Labels
area/behavior (all bugs related to kaniko behavior, like running as root) · priority/p3 (agreed that this would be good to have, but no one is available at the moment) · work-around-available

Comments

@antechrestos
Contributor

antechrestos commented Oct 7, 2019

Actual behavior

I am currently trying to build an image and push it to an OpenShift registry. The build runs outside the OpenShift cluster and pushes to the OpenShift registry at the end.

I am able to do this correctly with docker, and the docker push succeeds.

However, when I do it with kaniko, it ends with the following message:

error pushing image: failed to push to destination default-route-openshift-image-registry.apps.us-west-1.starter.openshift-online.com/test-kaniko/test-kaniko:latest: unsupported status code 401

I thought it might be a credential problem on my side, yet I also ran the following command on my laptop:

$> docker login -u 'builder' -p '<builder service account token>' default-route-openshift-image-registry.apps.us-west-1.starter.openshift-online.com

and then mounted the resulting config into the kaniko container using the option -v $HOME/.docker/config.json:/kaniko/.docker/config.json:ro.

Yet I still end up with the UNAUTHORIZED status code.

⚠️ I also tried to track the UNAUTHORIZED status code with Wireshark, which only shows the earlier OK status codes. I am tempted to say that the underlying library does not handle the large auth header (the OpenShift password used is a JWT token), but I am not sure, as the previous calls are successful.

The issue is related to the fact that the Dockerfile contains a FROM instruction whose image is pulled through a docker proxy URL.

To Reproduce
Steps to reproduce the behaviour (you can use OpenShift, as that is where I spotted it):

  1. Create a Dockerfile whose base image lives on a private registry different from the target one, or behind a docker proxy.
  2. Create a project on OpenShift and an image stream.
  3. Do your docker login with a builder token.
  4. Optionally run a docker build and push to validate your token.
  5. Launch the kaniko docker image: docker run --rm --entrypoint "" -it -v $PWD:/sources -v $HOME/.docker/config.json:/kaniko/.docker/config.json:ro gcr.io/kaniko-project/executor:debug sh
  6. Build with kaniko: /kaniko/executor --context "/sources" --dockerfile "/Dockerfile" --destination "default-route-openshift-image-registry.apps.us-west-1.starter.openshift-online.com/test-kaniko/test-kaniko:latest" --verbosity info --skip-tls-verify --skip-tls-verify-pull
Description Yes/No
Please check if this a new feature you are proposing
  • No
Please check if the build works in docker but not in kaniko
  • Yes
Please check if this error is seen when you use --cache flag
  • Yes but also without
Please check if your dockerfile is a multistage dockerfile
  • No
@ccremer

ccremer commented Oct 9, 2019

We also experience this. Interestingly enough, we have a GitLab CI pipeline that builds two images ("app" and "echo"); both jobs are very similar, as they extend the same base job .build.

However, the echo image can be built and pushed fine, while the app image cannot.
.gitlab-ci.yml:

.build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    TARGET_STAGE: production
    IMAGE_TAG: ${CI_COMMIT_SHA}
    REGISTRY_USER: none
    REGISTRY_PASSWORD: ${KUBE_TOKEN}
  environment:
    name: integration
    url: https://${KUBERNETES_URL}/console/project/${PROJECT_PREFIX}-${PROJECT}-${CI_ENVIRONMENT_NAME}/overview
  before_script:
  - echo '{"auths":{
    "'${IMAGE_REGISTRY}'":{"username":"'${REGISTRY_USER}'","password":"'${REGISTRY_PASSWORD}'"},
    "'${CI_REGISTRY}'":{"username":"'${CI_REGISTRY_USER}'","password":"'${CI_REGISTRY_PASSWORD}'"}}}' >
    /kaniko/.docker/config.json
  script:
  - echo "destination=${IMAGE}:${IMAGE_TAG}, context=${DOCKER_CONTEXT}, dockerfile=${DOCKER_FILE}"
  - sleep 10000000
  - >-
    /kaniko/executor
    --context ${DOCKER_CONTEXT}
    --destination ${IMAGE}:${IMAGE_TAG}
    --target ${TARGET_STAGE}
    --dockerfile ${DOCKER_FILE}
    --build-arg GIT_COMMIT_REF_NAME=${CI_COMMIT_REF_NAME}
    --build-arg GIT_COMMIT_SHA=${CI_COMMIT_SHA}
    --build-arg GIT_COMMIT_SHORT_SHA=${CI_COMMIT_SHORT_SHA}
    --build-arg BASE_IMAGE_TAG=${BASE_IMAGE_TAG}
  only:
  - master
  # temporary
  - merge_requests

app:
  extends: .build
  variables:
    IMAGE: ${IMAGE_REGISTRY}/${PROJECT_PREFIX}-${PROJECT}-${CI_ENVIRONMENT_NAME}/${API_IMAGE_NAME}
    DOCKER_FILE: docker/php/Dockerfile
    DOCKER_CONTEXT: ${CI_PROJECT_DIR}

echo:
  extends: .build
  variables:
    IMAGE: ${IMAGE_REGISTRY}/${PROJECT_PREFIX}-${PROJECT}-${CI_ENVIRONMENT_NAME}/${ECHO_IMAGE_NAME}
    DOCKER_FILE: docker/echo/Dockerfile
    DOCKER_CONTEXT: ${CI_PROJECT_DIR}/docker/echo

(the sleep is debugging on the CI runner)
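For reference, the echo in before_script above assembles a config.json of roughly this shape once the CI variables are expanded (hostnames and tokens here are placeholders, not our real values):

```json
{
  "auths": {
    "registry.example.internal": {
      "username": "none",
      "password": "<KUBE_TOKEN>"
    },
    "gitlab.example.com:4567": {
      "username": "gitlab-ci-token",
      "password": "<CI job token>"
    }
  }
}
```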

The difference between the Dockerfiles might be noteworthy:

$ cat docker/echo/Dockerfile
FROM node:10-alpine as production
...
$ cat docker/php/Dockerfile
ARG BASE_IMAGE_TAG=master
FROM registry.gitlab.xxx.tld/images/api:${BASE_IMAGE_TAG} as production
...
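The distinction matters because of how Docker parses image references: the first path component is treated as a registry host only if it contains a dot or a colon (or is exactly localhost); otherwise the image is resolved against Docker Hub. A simplified sketch of that rule (illustrative Python, not the real implementation):

```python
def registry_of(image_ref):
    """Return the registry host an image reference resolves to.

    Simplified version of Docker's domain-splitting rule: a reference
    with no '/' always lives on Docker Hub, and the first path
    component is a registry only when it looks like a host name
    (contains '.' or ':', or is exactly 'localhost').
    """
    if "/" not in image_ref:
        return "docker.io"  # e.g. FROM node:10-alpine
    first = image_ref.split("/", 1)[0]
    if "." in first or ":" in first or first == "localhost":
        return first        # explicitly named private registry
    return "docker.io"      # e.g. FROM library/node:10-alpine

print(registry_of("node:10-alpine"))
print(registry_of("registry.gitlab.xxx.tld/images/api:master"))
```

So the echo image's base resolves to Docker Hub, while the app image's base names a private registry, which is exactly where kaniko then has to pick different credentials than for the destination.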

@ccremer

ccremer commented Oct 9, 2019

I actually figured out why it cannot push:
the problematic difference is the FROM registry.gitlab.xxx.tld ... line.
When the FROM image comes from Docker Hub or from the same registry as the destination, it works, e.g.

FROM my.registry.com/blah/foo...
---
/kaniko/executor  --destination my.registry.com/blah/bar ...

This does not work (two different private registries):

FROM my.private.registry/blah/foo...
---
/kaniko/executor  --destination my.registry.com/blah/bar ...

Maybe it simply selects the wrong credentials when multiple registries are configured in /kaniko/.docker/config.json.
This affects 0.12 as well as 0.13.

Can you confirm this?
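The wrong-credentials hypothesis can be sketched in a few lines (illustrative Python, not kaniko's actual Go code; registry names and the resolve_auth helper are made up): a correct keychain has to resolve credentials per image reference, so the base image and the destination each get the entry for their own registry.

```python
# Sketch: pick credentials from a Docker-style config.json, keyed by
# the registry part of each image reference.
import json

CONFIG = json.loads("""
{"auths": {
  "my.private.registry": {"username": "builder", "password": "base-token"},
  "my.registry.com":     {"username": "deploy",  "password": "push-token"}
}}
""")

def resolve_auth(image_ref, config):
    """Return the auth entry for the registry part of an image reference."""
    registry = image_ref.split("/", 1)[0]
    return config["auths"].get(registry)

# The base image (FROM) and the destination live on different
# registries, so each lookup must return its own entry; reusing one
# set of credentials for both is exactly the 401 failure mode here.
base = resolve_auth("my.private.registry/blah/foo:latest", CONFIG)
dest = resolve_auth("my.registry.com/blah/bar:latest", CONFIG)
print(base["username"], dest["username"])
```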

@antechrestos
Contributor Author

@ccremer That's it!

In my case, my Dockerfile looks like

FROM <internal docker proxy>/python:3.6-alpine
...

Removing the reference to the docker proxy made it work. I am a little puzzled about the reason for this issue, as I am no expert in Docker's internals. In my case both the OpenShift registry and the docker proxy are inside my company network. I guess that docker push does something with the registry URL in the FROM instruction that kaniko does not 🤔

@ccremer

ccremer commented Oct 10, 2019

Glad you could work around the issue. At the same time, that confirms it.
In our case we reverted to docker builds for the time being, even though it is a security concern (root container, etc.).
Do you mind renaming the issue title to indicate something about multiple private registries? It's not really OpenShift-specific :)

@antechrestos changed the title Docker: unable to push on a openshift registry from external → Docker: unable to push on a some registry when FROM clause contains a registry/proxy url Oct 10, 2019
@antechrestos changed the title Docker: unable to push on a some registry when FROM clause contains a registry/proxy url → Docker: unable to push on a some registry when FROM clause contains a private registry different that target one or a proxy url Oct 10, 2019
@antechrestos
Contributor Author

@ccremer Yes I also enriched the description. Thank you for your help. Cheers.

@tejal29 added the work-around-available, area/behavior, and priority/p3 labels Oct 16, 2019
@antechrestos
Contributor Author

Going deeper into the issue with this linked issue

@antechrestos
Contributor Author

Should be solved by upgrading the third-party library.

@antechrestos
Contributor Author

@cvgw @tejal29 What is the process to upgrade third-party libraries? As kaniko does not use the Go modules approach, I guess that is not doable in a PR for security reasons, is it?

@cvgw
Contributor

cvgw commented Dec 10, 2019

@antechrestos kaniko is using dep, so you'll want to follow these instructions to upgrade a vendored package: https://golang.github.io/dep/docs/daily-dep.html#updating-dependencies

@antechrestos
Contributor Author

@cvgw I was on holiday, and waiting turned out to be a good thing, as you migrated to modules 😏
