Docker image with digest doesn't pull correctly #11821
Comments
Hi @axsuul! It looks like this error is bubbling up from […].
Thanks for looking into this @tgross!
Still seeing this in […].
@axsuul yeah, we haven't had a chance to address it yet. We'll update here once we have more info.
Hi @axsuul! I chased this down and the tl;dr is that the […]. I reproduced as follows. If we pull the image via the Docker CLI, we get the expected checksum:
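The pasted output is missing here; as an illustrative stand-in, using the postgres image and digest cited later in the thread, a pull by tag + digest looks roughly like this, with the reported `Digest:` matching the one requested:

```console
$ docker pull postgres:latest@sha256:47bc053e5d09cd5d2b3ff17a1e2142f7051a6d40f9f8028b411f88bf7f973265
docker.io/library/postgres@sha256:47bc053e5d09cd5d2b3ff17a1e2142f7051a6d40f9f8028b411f88bf7f973265: Pulling from library/postgres
Digest: sha256:47bc053e5d09cd5d2b3ff17a1e2142f7051a6d40f9f8028b411f88bf7f973265
Status: Image is up to date for postgres@sha256:47bc053e5d09cd5d2b3ff17a1e2142f7051a6d40f9f8028b411f88bf7f973265
```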
But notice the tag […]. Next let's remove that image with […]:
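Again as an illustrative stand-in for the missing snippet, removing the image by digest:

```console
$ docker rmi postgres@sha256:47bc053e5d09cd5d2b3ff17a1e2142f7051a6d40f9f8028b411f88bf7f973265
Untagged: postgres@sha256:47bc053e5d09cd5d2b3ff17a1e2142f7051a6d40f9f8028b411f88bf7f973265
```

…and then re-running the job so that Nomad's Docker driver performs the pull itself.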
But an image was pulled with the […].
When I went digging through the […]. See also […]. So while I think it might be worth us updating the […].
Hey @tgross, really appreciate you looking into this! Your reply makes sense, but I guess I'm now a bit confused about Docker digests and how they work. Note that the way I am referencing these digests worked fine in Kubernetes; only Nomad has been giving me issues with it. For example, take the postgres image and pull it with `docker pull postgres:latest@sha256:47bc053e5d09cd5d2b3ff17a1e2142f7051a6d40f9f8028b411f88bf7f973265`.
Then `docker images --digests | grep postgres` returns:
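The output didn't survive either, but given how Docker records images pulled by digest, it would look something like this (the image ID, age, and size are invented for illustration; the `<none>` tag is the real behavior):

```console
$ docker images --digests | grep postgres
postgres   <none>   sha256:47bc053e5d09cd5d2b3ff17a1e2142f7051a6d40f9f8028b411f88bf7f973265   f8dd270e5152   2 weeks ago   374MB
```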
Does that mean the tag […]?
Hmm okay, what makes this even more odd is that this image pulls fine with a digest:

```hcl
job "librespeed" {
  datacenters = ["dc1"]

  reschedule {
    delay_function = "constant"
  }

  group "librespeed" {
    network {
      mode = "bridge"
    }

    task "run" {
      driver = "docker"

      config {
        image = "adolfintel/speedtest:latest@sha256:df648683e8e03223254b3f4955976b93a328a5773f3ca8531e84afb7a55c8a01"
      }
    }
  }
}
```

So something seems to be off with the […].
The behavior is a little odd, and likely stems from (1) checksums being added late in the Docker API, and (2) tags being mutable vs. checksums being immutable. If you take a look at the Registry API, the manifest request takes a single reference, which can be either a tag or a digest but not both. In Docker, they must be throwing away the tag entirely, which is why you're seeing `<none>` in the tag column. So while we're doing the "technically correct" thing here in assuming you really meant the tag + sha, it diverges noticeably from how Docker does it. I'm going to bring this back to the rest of the Nomad team and see what they think about whether we should treat this divergence as a bug.
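For concreteness: the Registry API v2 manifest endpoint is `GET /v2/<name>/manifests/<reference>`, where `<reference>` is a single field holding either a tag or a digest. A rough sketch against Docker Hub (which hands out anonymous pull tokens for public images; `jq` here is just for extracting the token):

```console
# Fetch an anonymous pull token for library/postgres
$ TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/postgres:pull" | jq -r .token)

# The reference can be a tag...
$ curl -s -H "Authorization: Bearer $TOKEN" \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    "https://registry-1.docker.io/v2/library/postgres/manifests/latest"

# ...or a digest, but there is no way to send tag + digest together
$ curl -s -H "Authorization: Bearer $TOKEN" \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    "https://registry-1.docker.io/v2/library/postgres/manifests/sha256:47bc053e5d09cd5d2b3ff17a1e2142f7051a6d40f9f8028b411f88bf7f973265"
```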
We had a little chat internally and we're going to treat Nomad's behavior as a bug. We'd probably like to catch this at job validation time, but anything in the task's `config` block is opaque to the Nomad server and only interpreted by the driver on the client, so […]. To close out this bug we'll use the […].
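A possible interim workaround (a sketch on my part, not something confirmed in the thread): pin by digest alone, since the tag carries no information once a digest is given, and dropping it avoids Nomad having to reconcile the two:

```hcl
config {
  # Digest-only reference: no tag for Nomad to combine with the sha,
  # and Docker would discard the tag anyway when a digest is present.
  image = "postgres@sha256:47bc053e5d09cd5d2b3ff17a1e2142f7051a6d40f9f8028b411f88bf7f973265"
}
```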
Thanks, note I'm still seeing this bug in 1.4.3.
Still seeing this with 1.5.9:

```
2024-01-06T18:00:06.237Z [ERROR] client.driver_mgr.docker: failed getting image id: driver=docker image_name=alpine:latest@sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978 error="no such image"
2024-01-06T18:00:06.237Z [INFO] client.alloc_runner.task_runner: Task event: alloc_id=8087424f-84e7-5497-bbb5-e90519a0a195 task=backup type="Driver Failure" msg="no such image" failed=false
2024-01-06T18:00:06.242Z [ERROR] client.alloc_runner.task_runner: running driver failed: alloc_id=8087424f-84e7-5497-bbb5-e90519a0a195 task=backup error="no such image"
2024-01-06T18:00:23.322Z [ERROR] client.driver_mgr.docker: failed getting image id: driver=docker image_name=alpine:latest@sha256:eece025e432126ce23f223450a0326fbebde39cdf496a85d8c016293fc851978 error="no such image"
2024-01-06T18:00:23.323Z [INFO] client.alloc_runner.task_runner: Task event: alloc_id=051ff74c-19c1-d4da-42d4-f4aca81a722b task=backup type="Driver Failure" msg="no such image" failed=false
2024-01-06T18:00:23.325Z [ERROR] client.alloc_runner.task_runner: running driver failed: alloc_id=051ff74c-19c1-d4da-42d4-f4aca81a722b task=backup error="no such image"
```
Nomad version
Operating system and Environment details
Ubuntu 20.04
Issue
Nomad seems to be having trouble pulling images with digests still. I see that #4211 was resolved a while ago, but I'm still experiencing something off with this particular image.
If I try to pull the image manually on the client, it works.
Then running the job again works since the image is now on the client.
Reproduction steps
Attempt to run the job and see that the image cannot be pulled. Then pull the Docker image manually and run the job again; it succeeds.
Expected Result
Image with digest can be pulled
Actual Result
Image with digest cannot be pulled
Job file (if appropriate)
Nomad Server logs (if appropriate)
Nomad Client logs (if appropriate)