Pushing images to dockerhub stopped working #1209
Comments
Ah, side note: I confirmed that the credentials still work for pushing images to Docker Hub, both manually and, as mentioned, with the old kaniko version. For now we have pinned the kaniko version to that one. |
Is this a duplicate of #245? |
I don't think so. |
Encountering this issue as well. The last working version seems to be an older release. Could #957 be causing the issue, perhaps? |
It seems #1005 describes the same problem with kaniko. |
I can verify that for us, too, the latest working kaniko version is v0.16.0.
|
@gebi thanks, I confirm it works with that version. This is my script; I used variables to make it more meaningful for newbies (like myself an hour ago).
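For anyone who lands here, a minimal sketch of that kind of variable-driven push; the registry user, password, image name and tag below are placeholders, and the pinned v0.16.0 tag is only an assumption based on the comments above:
# Placeholders only: substitute your own values.
export REGISTRY_USER="my-dockerhub-user"
export REGISTRY_PASS="my-dockerhub-password"
export IMAGE_NAME="my-dockerhub-user/my-image"
export IMAGE_TAG="latest"

# kaniko reads registry credentials from /kaniko/.docker/config.json.
AUTH="$(echo -n "${REGISTRY_USER}:${REGISTRY_PASS}" | base64)"
echo "{\"auths\":{\"https://index.docker.io/v1/\":{\"auth\":\"${AUTH}\"}}}" > config.json

# Run the pinned executor with the generated config mounted read-only.
docker run --rm \
  -v "$(pwd)":/workspace \
  -v "$(pwd)/config.json":/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:v0.16.0 \
  --context dir:///workspace \
  --dockerfile Dockerfile \
  --destination "${IMAGE_NAME}:${IMAGE_TAG}"
|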
This is also happening to us using the latest version of "debug" (May 6, 2020) attempting to push to GCR, with the same error. Changing to an older tag works around it. |
@macrotex Can you please try kaniko v0.22.0 (https://github.com/GoogleContainerTools/kaniko/releases/tag/v0.22.0) and let us know if the issue still exists? |
Version 0.22.0 fixed my issue. |
I tried using https://index.docker.io/v1/ instead of the v2 one as the Docker registry URL, which seemed to work for me. |
Hey, I still get the aws_credentials error:
|
I too am getting the same error with v0.22.0. I tested this and the latest version to work is 0.16.0 as @gebi mentioned. |
This seems to work for Docker Hub:
$ export DOCKERHUB_AUTH="$(echo -n $DOCKER_HUB_REPOSITORY_USERNAME:$DOCKER_HUB_REPOSITORY_PASSWORD | base64)"
$ echo "{\"auths\":{\"https://index.docker.io/v1/\":{\"auth\":\"${DOCKERHUB_AUTH}\"}}}" > docker.json
$ docker run --rm -v $(pwd):/workspace -v $(pwd)/docker.json:/kaniko/.docker/config.json:ro gcr.io/kaniko-project/executor:v0.22.0 --context=dir:///workspace --dockerfile=Dockerfile --destination=foo/bar:latest |
Thanks @ymage. I was using the v2 Docker endpoint instead of v1. |
+1. I am able to upload docker images with https://index.docker.io/v1 but not https://index.docker.io/v2 with the latest kaniko debug executor image. Is anyone working on this issue? |
And there I was last Sunday, spending half a day thinking I was too stupid to build a simple image that I wanted to push to my private Docker Hub repository.
auths: [https://index.docker.io/v2/]
auths: [https://index.docker.io/v1/]
Which of the two should I use? I have no idea what difference it makes.
|
Works for me:
UPDATE: it seems the real reason was the UserAgent section in config.json. After removing this section I haven't had any problems with pushing (even with the original debug-539ddefcae3fd6b411a95982a830d987f4214251).
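For context, a sketch of what that can look like; the assumption here is that the UserAgent being referred to is the one docker login writes under the HttpHeaders key, and jq is used just as one convenient way to strip it before handing the file to kaniko:
# Example config.json with the offending section (auth value and registry key are placeholders):
#
# {
#   "auths": {
#     "https://index.docker.io/v1/": { "auth": "dXNlcjpwYXNz" }
#   },
#   "HttpHeaders": {
#     "User-Agent": "Docker-Client/19.03.8 (linux)"
#   }
# }
#
# Drop the HttpHeaders block before mounting the file as /kaniko/.docker/config.json:
jq 'del(.HttpHeaders)' config.json > kaniko-config.json
|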
After struggling all day with this issue, trying to push to Docker Hub with a previous version of kaniko, debug-v0.18.0, which was fine a few months ago in the same context (as for @gebi), I was able to push the image using the --registry-mirror option shown below. @tejal29, might it be related to the Docker Hub hostname or the default image path having changed, so that it is no longer compatible with kaniko (in old versions at least)? I'm using the commands below.
KO:
$ docker run --rm --entrypoint "" -v /host/path/to/kaniko/config.json:/kaniko/.docker/config.json -v /host/path/to/dockerfile/directory/kaniko/20200825-001/build1:/workspace gcr.io/kaniko-project/executor:debug-v0.18.0 /kaniko/executor --context /workspace --dockerfile /workspace/Dockerfile --destination index.docker.io/tanguydelignieres/kaniko_bugs_20200825-001_build1:debug-v0.18.0
INFO[0003] Resolved base name alpine:3.9 to alpine:3.9
INFO[0003] Resolved base name alpine:3.9 to alpine:3.9
INFO[0003] Retrieving image manifest alpine:3.9
INFO[0005] Retrieving image manifest alpine:3.9
INFO[0009] Built cross stage deps: map[]
INFO[0009] Retrieving image manifest alpine:3.9
INFO[0011] Retrieving image manifest alpine:3.9
INFO[0014] Skipping unpacking as no commands require it.
INFO[0014] Taking snapshot of full filesystem...
INFO[0014] Resolving paths
INFO[0014] CMD echo "OK"
error pushing image: failed to push to destination index.docker.io/tanguydelignieres/kaniko_bugs_20200825-001_build1:debug-v0.18.0: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:tanguydelignieres/kaniko_bugs_20200825-001_build1 Type:repository] map[Action:push Class: Name:tanguydelignieres/kaniko_bugs_20200825-001_build1 Type:repository] map[Action:pull Class: Name:library/alpine Type:repository]]
$
OK with:
$ docker run --rm --entrypoint "" -v /host/path/to/kaniko/config.json:/kaniko/.docker/config.json -v /host/path/to/dockerfile/directory/kaniko/20200825-001/build1:/workspace gcr.io/kaniko-project/executor:debug-v0.18.0 /kaniko/executor --registry-mirror index.docker.io --context /workspace --dockerfile /workspace/Dockerfile --destination index.docker.io/tanguydelignieres/kaniko_bugs_20200925-001_build1:debug-v0.18.0
INFO[0002] Resolved base name alpine:3.9 to alpine:3.9
INFO[0002] Resolved base name alpine:3.9 to alpine:3.9
INFO[0002] Retrieving image manifest alpine:3.9
INFO[0004] Retrieving image manifest alpine:3.9
INFO[0007] Built cross stage deps: map[]
INFO[0007] Retrieving image manifest alpine:3.9
INFO[0008] Retrieving image manifest alpine:3.9
INFO[0010] Skipping unpacking as no commands require it.
INFO[0010] Taking snapshot of full filesystem...
INFO[0010] Resolving paths
INFO[0010] CMD echo "OK"
$
I confirm I did not face the problem with the --registry-mirror workaround. |
I tried Kaniko v1.0.0 with the Docker Hub v2 endpoint and it fails. It works using v1. |
Thank you folks, I updated the docs to use the v1 endpoint. |
@tanguydelignieresaccenture I am still trying to understand why adding --registry-mirror makes a difference here. |
To get around GoogleContainerTools/kaniko#1209 we need to auth using the DockerHub v1 API. As that variable is the "registry" and we have to use a FQDN with protocol, we have to convert it to the standard registry format for tagging the image; otherwise there's an issue with the auth mechanism where it attempts to normalize the URL and incorrectly parses it. GoogleContainerTools/kaniko#1209
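To illustrate what that conversion looks like in practice, a rough shell sketch (the variable names here are hypothetical, not from the commit itself): the full v1 URL is kept for the auths key, while the scheme and path are stripped to get a host usable in an image reference.
# Auth against the full v1 URL, but tag with a bare registry host.
DOCKER_REGISTRY="https://index.docker.io/v1/"   # used as the auths key
IMAGE_PATH="myuser/myimage:latest"              # hypothetical image path

# https://index.docker.io/v1/ -> index.docker.io
REGISTRY_HOST="$(echo "${DOCKER_REGISTRY}" | sed -e 's|^[a-z]*://||' -e 's|/.*$||')"

echo "tagging as ${REGISTRY_HOST}/${IMAGE_PATH}"  # index.docker.io/myuser/myimage:latest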
I'm having the same issue |
Kaniko does not (or no longer?) seem to support authenticating against Docker Hub's v2 API endpoint, as also discussed on GitHub: GoogleContainerTools/kaniko#1209. The suggested workaround seems to be to use the v1 endpoint, so hackily inject it when generating a docker-cfg for kaniko.
I used https://index.docker.io/v1/ and it worked.
|
OK, so: since v1 is deprecated, I don't believe using v1 is the safe option here, or is it? Which makes me think that somewhere between debug-v0.16.0 and debug-v0.19.0 something changed that makes registries think kaniko is an old Docker client, and thus they block kaniko from pushing/pulling on v2? I don't know. |
After hours upon hours on this issue, still getting the Unauthorized error, I kept digging.
|
I'm having the exact same problem; did you ever find a fix for this? |
The key under auths has to match the registry you are pushing to. So if you're trying to auth to a Harbor registry at your own hostname, use that hostname as the auths key in config.json. |
I've tried this, without luck. In our setup, we are also using a service account to let us pull images from AWS ECR for our base image, but our final built image is pushed to our own private registry. Is there any chance that using a combination of registries is preventing us from pushing to our custom registry? |
Have you set a proxy and forgotten about it? |
No, we have not set any proxies. My config.json looks like this:
{
"auths": {
"private.registry.net": {
"username": "admin",
"password": "XXX",
"auth": "YYY="
}
}
}
and I am setting the kaniko destination value to an image under private.registry.net. The error I'm getting from kaniko is:
|
So first of all, it probably isn't the cause of the issue, but it would help resolve confusion to not use both username/password and auth in the same auths entry. After removing those, make sure the auth value is the base64 of username:password and that the registry key matches the host used in --destination.
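Concretely, a sketch of regenerating the entry with only an auth value, keyed by the bare registry host (the credentials and hostname are the placeholders from the snippet above):
# Rebuild the entry from username:password and key it by the same host used in --destination.
AUTH="$(echo -n 'admin:XXX' | base64)"
cat > config.json <<EOF
{
  "auths": {
    "private.registry.net": {
      "auth": "${AUTH}"
    }
  }
}
EOF
|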
It's okay, I've finally sorted it. Very daft mistake: I basically just had the config mounted in the wrong place... Thank you guys for helping out! |
Can you provide the part of the code where you mount it? |
Did you manage to resolve this issue? |
It's still not working with v2 but works with v1 in the config.json. |
Are there any configuration changes required to make it work with the Docker API v2 (https://index.docker.io/v2)? |
+1 |
Just tested it today with v1.14.0-debug. My gitlab-ci job:
deploy:
stage: deploy
image:
name: gcr.io/kaniko-project/executor:v1.14.0-debug
entrypoint: [""]
script:
- echo -n "{\"auths\":{\"https://index.docker.io/v2/\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
- /kaniko/executor
--context "${CI_PROJECT_DIR}"
--dockerfile "${CI_PROJECT_DIR}/Dockerfile"
--destination=docker.io/<user>/<img_name>:<my_tag> |
Still doesn't work on the latest image with a private GitLab registry.
followed by a GitLab 404 page |
Which version of GitLab are you using? It works for me on a private GitLab with gcr.io/kaniko-project/executor:v1.22.0-debug. |
@mlec1 probably an old version I used; it was a long time ago for me. |
Building an image with kaniko behind a proxy works for me (see https://archives.docs.gitlab.com/16.11/ee/ci/docker/using_kaniko.html).
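For anyone outside GitLab CI, a sketch of a proxy-aware invocation; the proxy host is hypothetical, kaniko (being a Go binary) generally honors the standard HTTP_PROXY/HTTPS_PROXY/NO_PROXY environment variables for registry traffic, and --build-arg forwards the proxy to the Dockerfile build itself:
# proxy.example.com:3128 is a made-up corporate proxy.
docker run --rm \
  -e HTTP_PROXY=http://proxy.example.com:3128 \
  -e HTTPS_PROXY=http://proxy.example.com:3128 \
  -e NO_PROXY=localhost,127.0.0.1 \
  -v "$(pwd)/config.json":/kaniko/.docker/config.json:ro \
  -v "$(pwd)":/workspace \
  gcr.io/kaniko-project/executor:v1.22.0 \
  --context dir:///workspace \
  --dockerfile /workspace/Dockerfile \
  --build-arg http_proxy=http://proxy.example.com:3128 \
  --build-arg https_proxy=http://proxy.example.com:3128 \
  --destination docker.io/<user>/<image>:<tag>
|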
Actual behavior
Kaniko exits with exit code 1 with the following message and does not build the image:
This worked with the same build pipeline and no changes 3 months ago with the following image:
Expected behavior
Kaniko uploads the image to Docker Hub, like the version from 3 months ago was able to.
There were no changes on our side, and it works if I go back to an older kaniko version.
To Reproduce
Steps to reproduce the behavior:
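A minimal sketch of a reproduction, assuming Docker Hub credentials in DOCKERHUB_USER / DOCKERHUB_PASS and a trivial Dockerfile in the current directory:
# 1. A trivial Dockerfile.
printf 'FROM alpine:3.9\nCMD echo "OK"\n' > Dockerfile

# 2. Docker Hub credentials in the config file kaniko expects.
AUTH="$(echo -n "${DOCKERHUB_USER}:${DOCKERHUB_PASS}" | base64)"
echo "{\"auths\":{\"https://index.docker.io/v2/\":{\"auth\":\"${AUTH}\"}}}" > config.json

# 3. Build and push with a recent executor; with the v2 key this ends in
#    "UNAUTHORIZED: authentication required", while older executors or the v1 key succeed.
docker run --rm \
  -v "$(pwd)":/workspace \
  -v "$(pwd)/config.json":/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:v0.19.0 \
  --context dir:///workspace \
  --dockerfile Dockerfile \
  --destination "docker.io/${DOCKERHUB_USER}/kaniko-repro:latest"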
Additional Information
Please provide either the Dockerfile you're trying to build or one that can reproduce this error.
Please provide or clearly describe any files needed to build the Dockerfile (ADD/COPY commands)
Triage Notes for the Maintainers
Please check if this error is seen when you use the --cache flag