Make `nix-build-if-changed` a first class feature of nix #4364
cc @Mic92
Thanks for opening that, I also think that's a missing bit in the ecosystem. A few remarks:
- Technically […]
- I guess that would be mostly the same as […]
- Fwiw, you can possibly be a slight bit smarter by […] (But agreed, that's not getting you very far, and that's re-implementing some logic that Nix already has internally)
- Most new-style commands have a […]
- You can cache most of the evaluation by first evaluating to a derivation and then doing all the operations on the drv file itself (or use the native caching provided by flakes once they stabilize)
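That last point, sketched minimally in shell (the file name `default.nix` and the attribute `myPackage` are placeholders):

```sh
# Evaluate once, producing the .drv file; this is the expensive evaluation step.
drv=$(nix-instantiate ./default.nix -A myPackage)

# Everything after this works on the .drv directly, with no re-evaluation:
nix-store --realise --dry-run "$drv"   # show what would be built or fetched
nix-store --realise "$drv"             # actually build/substitute it
```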
Isn't there already an issue for this?
See #3946
I disagree with this. nix sandboxing is very restrictive in what kind of things you can do. You shouldn’t need to adjust your tests to work inside a sandbox. For example, your tests could be doing their own sandboxing, but user namespaces (which nix uses) are not really nestable.
I don’t think so, you shouldn’t need to download the final result.
My bad, Profpatsch/nix-build-if-changed.py doesn't do evaluation twice, but evaluates once and then runs […]
This is at least most of the way fixed with #4387
Another edge-case I noticed in practice: if your root (the drv you are building) is e.g. a […] Ignoring all […]
Side rant: IMO […] Even as an advanced user, I often got confused when seeing rebuilds in CI. And because all we get is the store path, I am now hunting down which nix code is producing those. I'd rather have Nix be slow and consistent, or at least have a clear explanation of why the rebuild is happening, or have a […]. This also prompted me to write this forced-cache.nix script: https://github.com/Mic92/nix-build-uncached/#packages-with-allowsubstitutes--false
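For context, `allowSubstitutes` is an ordinary derivation attribute; a derivation that sets it to `false` is always built locally and never fetched from a binary cache, which is what causes those surprise rebuilds in CI. A purely illustrative example (the derivation name and build script are made up):

```sh
nix-build -E '
  with import <nixpkgs> {};
  # allowSubstitutes = false tells Nix never to substitute this output,
  # even if a binary cache already has it.
  runCommand "never-substituted" { allowSubstitutes = false; } "echo hi > $out"
'
```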
I agree, it makes "why didn't my build substitute" more complex than it needs to be. It's also a slippery slope as there's no guarantee that substituting would be slower than building. I think it's a poor workaround for #3379
Depends on: #4747
This issue has been mentioned on NixOS Discourse. There might be relevant details there: https://discourse.nixos.org/t/github-actions-and-or-with-hercules-ci/13537/2
#4387 was merged, so try something like `nix-build --store my-remote-store --builders 'auto - - 1 1'` with Nix master.
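Spelled out a bit, with placeholders for the store URI and attribute (`$MY_REMOTE_STORE`, `ci`); as I read the comment, the point is that results land in the remote store rather than being pulled into the local one, and this needs a Nix that includes #4387:

```sh
# --store points nix-build at a store other than the local /nix/store;
# --builders 'auto - - 1 1' lets the local machine act as the builder.
nix-build ./default.nix -A ci \
  --store "$MY_REMOTE_STORE" \
  --builders 'auto - - 1 1'
```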
Is there documentation for this? https://github.com/NixOS/nix/pull/4387/files doesn't touch the manual, and it is not clear to me if this was just a bugfix, or if docs are missing.
Yeah, I'm sorry @Ericson2314, but anything that is not documented in depth in the manual I cannot in good conscience call "implemented". This is a general gripe I have with nix; it is a culture problem (no offense intended).
For example I stumbled over the […]
This feature really deserves a simple flag for users to flip it on/off.
@Profpatsch Yes, but my PR also didn't change the CLI :D; it simply made some existing things work where they didn't before.
#3946 and this are the same, as far as I can tell. Should one of them be closed?
I feel like the other one should have been closed, since this one had more discussion and a better worked-out description.
I always close the newest one, but I don't mind either way.
This resolves the problems detailed here: NixOS#4442 The `allowSubstitute=false` "feature" wreaks havoc on CI / AWS / container type builds, because we *cannot* cache hit on AWS pkgs, and further features like `nix-build-if-changed` do not work without really ugly wrapper hacks: NixOS#4364 Seen in the wild, in addition to the above, we've hit two problems at PDT specifically: 1. User buildEnv results can't be substituted out of S3. This means that in pdtnix, we can't push them to S3 from CA and have hosts like mtsync pull them. This then results in mtsync failing, because it tries to build, but it doesn't have DTS-8 installed. 2. On AWS, there are >10,000 pkgs in texlive that say allowSubstitutes=false. These are all in S3, but if you cold start a host it will rebuild them all! Moreover, our buildPDTEnv calls are sometimes very expensive (20,000+ symlinks in turnover-tool style bundles!), so the whole notion that this is faster to build locally than substitute out of S3 is flawed. We also run pytest suites, which involve starting tests on NFS, etc., and do a bunch of `find` filtering to find tests. Per comment upstream, this is a controversial change and unlikely to be accepted upstream without more discussion (currently underway in the community), so we'll keep it internal for now.
**Is your feature request related to a problem? Please describe.**

When using nix on CI, I want every `nix-build`/`nix-store --realize` to be a no-op if it's already built. This means already built OR available on a substituter. If you run a naïve `nix-build`, it will download the full runtime closure, which can be many megabytes of useless networking if all you want to do is verify whether the build succeeds.

One step further: if I want to generate a script with nix (e.g. a test suite) and then run it on CI (outside of the nix sandbox), it obviously needs to download the runtime closure. However, it does not have to download it if it didn't change, because that means we ran the test before and the result won't change (as long as all test dependencies are tracked by nix). Compare to how `bazel` handles test suites in a similar fashion.

**Describe the solution you'd like**
This requirement is so common that multiple people have written wrappers around nix independently of each other. The ones that I know of: […]

I think at this point it would make sense for nix to get one or more flags to make those scripts unnecessary. (They effectively have to parse `nix-build --dry-run` output to figure out what nix would do and then evaluate a second time.)
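Roughly what those wrapper scripts have to do today; this is a generic sketch, not the contents of any particular one, and `./default.nix -A ci` is a placeholder:

```sh
#!/usr/bin/env bash
set -euo pipefail

# Evaluate once to get the .drv, then ask Nix what realising it would involve.
drv=$(nix-instantiate ./default.nix -A ci)
plan=$(nix-store --realise --dry-run "$drv" 2>&1)

if echo "$plan" | grep -q "will be built"; then
  # Something actually changed: build it (and fetch the result if needed).
  nix-store --realise "$drv"
else
  echo "already built or substitutable; nothing to do"
fi
```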
A simple `--fetch=no`/`--fetch=if-changed`/`--fetch=always` (default) flag to `nix-build` might be enough to get most of the advantages; it would only download the resulting path if it had to do a build, and it would signal (via exit code?) whether it actually did the build and download. This might conflate multiple separate issues though, so I don't know if that's the best semantics.

- `--fetch=always`: current behavior
- `--fetch=if-changed`: the result will only be fetched if it had to be built (aka wasn't either in the local store or completely substitutable). This is a good way to e.g. build test scripts if you want to run them as well.
- `--fetch=no`: the result should never be fetched. You might think "but if I have to build I will have it in my store by design", but that's not true if you have a `--builder` configured because you don't want to actually build things on CI. In that case, even though the builder builds the resulting path, all CI wants to know is whether the store path can be successfully built; it does not need the result.
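Purely to illustrate the proposed interface (none of these flags exist today; `ci` and `tests` are placeholder attributes):

```sh
# Hypothetical syntax from the proposal above -- not an actual nix-build flag.
nix-build ./default.nix -A ci --fetch=no             # check buildability, never download the result
nix-build ./default.nix -A tests --fetch=if-changed  # download the result only if it was rebuilt
```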
**Describe alternatives you've considered**

`nix run` […] in the old-style tools, and the new-style command line is experimental; plus it would hide an important piece of information (already built) and bind it to a fixed action, so it's less composable if somebody wants to use the information for some other integration goal.

cc @regnat