Optimize closure copying for high-latency remotes #5026
I would also like to try out QUIC some day so we can send files 1:1 in parallel streams without head-of-line blocking. In any event, we might want to allow stores that don't care about references being intact to avoid doing the transfer in (partial) order.
QUIC helps with cheaper handshakes and with packet loss (the head-of-line problem), but I expect neither is really significant in practice for Nix remotes; we suffer mainly from deep dependency graphs and the way we work with them.
Maybe we could implement some sort of batching? I am thinking of something along the lines of a gateway over an S3 bucket that would add the required batching operations (I don't think S3 on its own is flexible enough). Additional functionality, such as permission control, could also be added to this gateway in the future; I think such a feature would be neat.
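A minimal sketch of what such a batching gateway could look like. All names here are hypothetical, for illustration only; none of this is a real Nix or S3 API. The idea is simply to queue individual store-path requests and issue them to the backing store as one batched call instead of one round trip per path:

```python
class BatchingGateway:
    """Hypothetical sketch: coalesce per-path requests into batched
    calls to a backing store (e.g. a gateway in front of an S3 bucket).
    """

    def __init__(self, fetch_batch, max_batch=32):
        # fetch_batch: callable taking a list of store paths and
        # returning a {path: data} dict in a single remote call.
        self.fetch_batch = fetch_batch
        self.max_batch = max_batch
        self.queue = []
        self.results = {}

    def request(self, path):
        # Queue the path; the actual fetch is deferred until flush().
        self.queue.append(path)
        if len(self.queue) >= self.max_batch:
            self.flush()

    def flush(self):
        # One remote call covers every queued path.
        if self.queue:
            self.results.update(self.fetch_batch(self.queue))
            self.queue = []

    def get(self, path):
        # Force a flush if the path has not been fetched yet.
        if path not in self.results:
            self.flush()
        return self.results[path]


# Usage with a fake backing store that records how many calls it sees:
calls = []

def fake_fetch(paths):
    calls.append(list(paths))
    return {p: "nar:" + p for p in paths}

gw = BatchingGateway(fake_fetch, max_batch=100)
for i in range(10):
    gw.request("/nix/store/path-%d" % i)
gw.flush()
# Ten requested paths, but only one remote call was made.
```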
The issue with copying tons of small store paths over a high-latency link is the long pause before copying begins.
Currently, copying closures with lots of small files (like `.drv` closures or NixOS system closures) is slow because every store path is copied separately and requires a round trip to the remote. Sending the entire set of missing paths in a single call (as the old `nix-store --export` did) is much faster.
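A rough cost model makes the round-trip penalty concrete. All numbers below are illustrative assumptions, not measurements; the point is only that per-path copying pays the link latency once per path, while a batched transfer pays it roughly once in total:

```python
def per_path_seconds(n_paths, rtt, per_path_transfer):
    # One round trip plus the transfer time for each store path.
    return n_paths * (rtt + per_path_transfer)

def batched_seconds(n_paths, rtt, per_path_transfer):
    # A single round trip to negotiate, then one streamed transfer
    # covering all paths (the nix-store --export style of copying).
    return rtt + n_paths * per_path_transfer

# Hypothetical example: 2000 small paths, 150 ms RTT,
# 5 ms of transfer time per path.
sequential = per_path_seconds(2000, 0.150, 0.005)  # roughly 310 s
batched = batched_seconds(2000, 0.150, 0.005)      # roughly 10 s
```

With these assumed numbers the latency term dominates completely: the batched transfer is bounded by bandwidth, while the per-path transfer is bounded by the number of round trips.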