exporting to image takes a lot of time #1704
Comments
This is likely due to the containerd differ. Post a full reproducer so it can be verified. It might depend on the time precision of your backing filesystem: if it is very low, the containerd differ falls back to comparing files by data.
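For reference, a quick way to check the timestamp precision of the backing filesystem is to create a file on it and look at the stored mtime (a sketch; the path below is a placeholder for whatever directory backs the BuildKit state):

```bash
# Create a file on the filesystem backing the BuildKit state directory
# and print its full mtime; with GNU coreutils, %y includes nanoseconds.
touch /path/to/buildkit-state/precision-test
stat -c '%y' /path/to/buildkit-state/precision-test
# ext4 with 256-byte inodes stores nanosecond timestamps, so a non-zero
# fractional part is expected here.
```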
The backing filesystem is ext4 -
This is running on Azure Kubernetes Service, and the filesystem is on an Azure persistent disk volume mounted at /home/user/.local/share/buildkit. I also tried removing the persistent volume mount at /home/user/.local/share/buildkit, but got similar results. The node it is running on is Ubuntu 16.04.7 LTS, kernel 4.15.0-1093-azure.
@tonistiigi It is indeed the case that the containerd differ is falling back to comparing files by data. However, the issue is not with the backing filesystem: I notice that the modify times of most files in the image have 0 for the nanosecond part. To reproduce, you can use this Dockerfile -
Build - after the initial build, I create a new file and build again. If we then look inside the docker image, the modify times have 0 nanos.
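A rough sketch of that check (the Dockerfile, image name, and file paths here are placeholders, not the exact reproducer from this issue):

```bash
# Placeholder Dockerfile that creates a file during the build
cat > Dockerfile <<'EOF'
FROM ubuntu:20.04
RUN touch /opt/marker
EOF

docker build -t nanos-test .

# With GNU coreutils, %y prints the full mtime including nanoseconds;
# a trailing ".000000000" means the nanosecond part is zero.
docker run --rm nanos-test stat -c '%y %n' /opt/marker /etc/hostname
```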
So what is the time precision of the files you create during the build? Can you confirm these files have nanoseconds on your filesystem? Unfortunately, I don't see any other solution for this than to break away from the containerd differs and implement our own high-performance ones. Not an easy task.
In the example, nothing much is happening. I can confirm the files on my host filesystem have nanoseconds. When building with docker, I do notice however that the files lose their nano precision (probably because during docker build the context is copied through tar?). If the base image is built with docker and its files do not have nanos, should it actually matter? I added some additional debug logs in the differ code.
If we compare lower and upper layers, and the lower layer has files with 0 nanos which are not overridden in the upper layer, then it should not be a problem?
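A quick host-side way to see the tar round-trip effect (assuming GNU tar with its default format, which stores mtimes with whole-second precision):

```bash
# A freshly created file normally has a nanosecond mtime component on ext4.
touch /tmp/nano-src
stat -c '%y' /tmp/nano-src

# Round-trip it through tar and check again: the fractional part is dropped.
mkdir -p /tmp/nano-out
tar -C /tmp -cf - nano-src | tar -C /tmp/nano-out -xf -
stat -c '%y' /tmp/nano-out/nano-src
```

(The pax/posix tar format can carry sub-second timestamps in extended headers, but the default GNU format does not.)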
The differ is not overlay specific (it doesn't know what is upper/lower). It just compares two mountpoints.
OK, I don't have a lot of context, but it does not seem right that the entire filesystem gets read and compared.
If I have a Dockerfile like the sketch below, no directory except /tmp is modified from the base image, yet in the logs I can see the differ visiting paths across the whole tree.
It compares the entire tree, and all of these comparisons are done by file content due to the missing nanos in the files/directories.
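For illustration, a Dockerfile of the kind meant here might look like this (placeholder content, not the exact file from this report), built with the same command as in the reproduction steps below:

```bash
# The only change over the base image is a single file under /tmp.
cat > Dockerfile <<'EOF'
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN touch /tmp/marker
EOF

buildctl build --frontend dockerfile.v0 \
  --opt build-arg:BASE_IMAGE=$BASE_IMAGE \
  --local context=. --local dockerfile=. \
  --output type=image,name=$IMAGE,push=true
```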
@saswatac Hi, is there any solution now? I encountered the same problem.
@gjymosha Unfortunately not. If you are running the build in a cloud environment, you may want to check that disk I/O is not being throttled; for me, increasing the disk size helped improve the times, but it is still not satisfactory. @tonistiigi I looked at the code a bit further. I am wondering whether, instead of doing a double diff walk in https://github.com/containerd/continuity/blob/master/fs/diff.go#L111, we could implement https://github.com/containerd/continuity/blob/master/fs/diff_unix.go#L33 and do -
@saswatac Thank you very much for your reply. I have another question: since the snapshots generated during the build steps are already changesets, why not use the snapshot content directly to generate a new layer, instead of recalculating the changes by comparing the two mountpoints? @tonistiigi
FYI: opened a draft PR to solve this at #2181.
I just noticed that this did not make it into the 0.9.1 release. When is this fix planned to be included?
This is too big to cherry-pick for a patch release and will come with v0.10.
Is there any workaround for this issue until the 0.10 release? For some reason this only happens to us with one specific build, while all others on the same platform work fast and as expected.
Came here to check for the same issue, which occurred in a multi-stage build.
Above is the command I used. My Dockerfile has a two-stage build, so in order to debug the first stage I executed the above command.
This should be fixed, as the current release is v0.18, but I still get huge image export times. I timed the export of a simple, slightly customized mongodb image with fast build times (excluding exporting); the build time with image export was:
Description
After an initial build, I am making changes in my code and doing a follow-up build. The changes affect only the last couple of layers.
During the build, the cache gets used as expected, and the last two steps in the Dockerfile are run, which is quite fast.
But it takes extremely long to export the image; exporting the layers takes most of the time.
Steps to reproduce the issue:
build command
buildctl build --frontend dockerfile.v0 --opt build-arg:BASE_IMAGE=$BASE_IMAGE --local context=. --local dockerfile=. --output type=image,name=$IMAGE,push=true
touch test
buildctl build --frontend dockerfile.v0 --opt build-arg:BASE_IMAGE=$BASE_IMAGE --local context=. --local dockerfile=. --output type=image,name=$IMAGE,push=true
Describe the results you received:
Logs of initial build -
Logs of the second build -
Describe the results you expected:
Why does exporting the layers while exporting the image take this much time? (244 sec in the initial build, 89 sec in the second build)
Are there any options I am missing which could help speed it up?
Version
Any other relevant information:
I am running the BuildKit daemon rootless image in Kubernetes as a sidecar container. buildctl is run from the development container in the pod.