CI: fix Docker layer caching #114763
Conversation
@bors r+ rollup=never p=1

☀️ Test successful - checks-actions
Finished benchmarking commit (28eb857): comparison URL.

Overall result: ✅ improvements - no action needed

@rustbot label: -perf-regression

Instruction count: This is a highly reliable metric that was used to determine the overall result at the top of this comment.
Max RSS (memory usage): This benchmark run did not return any relevant results for this metric.
Cycles: This benchmark run did not return any relevant results for this metric.
Binary size: This benchmark run did not return any relevant results for this metric.

Bootstrap: 634.075s -> 632.358s (-0.27%)
CI is fast again.
As reported by @klensy on Zulip, GitHub Actions has recently updated its Docker version from 20.x to 23.x, which enabled the BuildKit build backend by default.
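(For reference, the Docker engine version on a runner can be checked with the standard CLI; 23.x is where `docker build` started defaulting to BuildKit.)

```bash
# Print the daemon version reported by the runner's Docker engine.
docker version --format '{{.Server.Version}}'
```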
This broke our way of performing Docker layer caching on CI, which immediately made all non-PR CI builds (including try builds) ~1 hour longer. (Docker caching didn't work on PR builds before, so they weren't affected.) The moment this started happening can be seen here.
The problem is with the following command:
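```bash
# Sketch of the layer-listing step (the original snippet isn't preserved
# above; the `rust-ci` image tag is an assumption): print the ID of every
# layer of the freshly built image so the layers can be exported with
# `docker save` and uploaded to the S3 cache.
docker history -q rust-ci
```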
which returns the intermediate layers as `<missing>` if BuildKit is enabled. This was investigated by @klensy in #114621, thanks for that!

I will continue experimenting with how we can enable the cache with BuildKit in #114762, but for the time being, I think that we should just hotfix this.
This PR reverts the build backend to the old one, which fixes the caching. However, we also have to bust the cache of all Dockerfiles; otherwise, caching would only start kicking in for them the next time they are updated (or the next time GH updates their Docker version). When the Docker version was last updated, the Dockerfiles were cached on S3 with an essentially empty cache. Unless we bust it, even after reverting to the old build engine, the CI script would just download the empty cache and rebuild the Dockerfile from scratch, nullifying our fix. A sketch of both halves of the fix is shown below.
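A minimal sketch of the two parts, assuming a build script along the lines of our `src/ci/docker/run.sh` (`$dockerfile`, `$context`, and the cache-key computation are illustrative placeholders, not the PR's exact diff):

```bash
# Opt out of BuildKit so `docker build` uses the legacy builder again and
# intermediate layers keep real, cacheable IDs. DOCKER_BUILDKIT=0 is Docker's
# documented switch for this.
export DOCKER_BUILDKIT=0
docker build -t rust-ci -f "$dockerfile" "$context"

# Bust the stale (empty) S3 cache: if the cache key is a hash of the
# Dockerfile, any edit to the file (even a comment) yields a new key.
# Hypothetical illustration of such a key:
cache_key=$(sha512sum "$dockerfile" | cut -d' ' -f1)
```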
r? @Mark-Simulacrum