# Mark nodes "dirty" during invalidation, rather than deleting them #4558
## Comments
Relates to #3365.

Also relates to #4394... it's possible that implementing this issue would make fixing that one unnecessary.

For me, this maps onto how […]

Actually, we never reuse them currently... they're always cleared if we toggle back to an older version. They're mostly stable for the purposes of symlinks into […]
Assigned to @jsirois |
@jsirois: One thing I hadn't thought of (and which you may now have thought of): we currently don't account for equality of […]

I think that in the case of un-dirtying a […]

EDIT: An update on this: I added a […]
(this may also be exacerbated by #4394, though)
Since we can't have two copies of the same `Node` (ie, with the same `NodeKey`), I think that the worst case is that "all of the nodes that represent the […]
ah, gotcha
#4394 was less of a solution than we thought, so this is now hindering the efficiency of […]

Going to switch to this to see if I can knock it out today.
### Problem

#4558 introduces a new state for a `Node`: it may be `dirty`, which requires that we remember the previous value of a `Node` while simultaneously computing its next value. I began implementing this state machine with our existing `future::Shared` usage, but ran into the need to either hide values in the closure of the running future, or store values atomically outside of the future's value. This complexity represents a classic case for a state machine. Additionally, we have seen that `future::Shared` has a relatively high cost in terms of complexity and memory (see #5990).

### Solution

In order to make #4558 easier to implement correctly (and to reduce memory usage), implement the storage of a `Node`'s value in the `Graph` via a state machine.

One of the largest complexities of `Shared` is that it needs to elect one of the waiters of the `Shared` future as the one who will actually do the work of `poll`ing the wrapped future. To avoid that complexity, we introduce usage of the `tokio` `Runtime` for _all_ `Node` executions. This allows waiters to stay very simple, and to receive the result value via a `oneshot::channel`.

### Result

#4558 should be much easier to implement. I'll likely wait to land this until #4558 is further along.

This change reduces memory usage about 200MB further than the fix from #5990 (down to about 2.5GB for `./pants list ::` in Twitter's repo). Unfortunately, in its current form it also represents a performance regression of about 10% for that same usecase. Although I believe I have a handle on why, I'd like to defer fixing that regression until after #4558 is implemented.

This also fixes #3695: since we now fail slowly, we are able to render multiple distinct failures at once.
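The waiter-side simplification described above can be sketched in miniature. This is a hypothetical illustration in Python (the actual engine is Rust using `tokio` and `oneshot::channel`; the `Entry`, `get`, and executor names here are assumed for the sketch): the entry holds an explicit state machine, a single submitted task does the work, and each waiter just receives the value over its own one-shot channel rather than being elected to poll a shared future.

```python
# Hypothetical sketch of an entry state machine (not the pants engine's
# actual code). One task computes the value; waiters each get a one-shot
# channel (modeled here as a bounded Queue) that receives the result.
import queue
import threading


class EntryState:
    NOT_STARTED, RUNNING, COMPLETED = range(3)


class Entry:
    def __init__(self):
        self._lock = threading.Lock()
        self._state = EntryState.NOT_STARTED
        self._value = None
        self._waiters = []  # one one-shot queue per waiter

    def get(self, compute, executor):
        """Return a one-shot queue that will receive the computed value."""
        rx = queue.Queue(maxsize=1)
        with self._lock:
            if self._state == EntryState.COMPLETED:
                rx.put(self._value)
                return rx
            self._waiters.append(rx)
            if self._state == EntryState.NOT_STARTED:
                self._state = EntryState.RUNNING
                # The executor stands in for the tokio Runtime: it, not a
                # waiter, drives the node body to completion exactly once.
                executor.submit(self._run, compute)
        return rx

    def _run(self, compute):
        value = compute()
        with self._lock:
            self._state = EntryState.COMPLETED
            self._value = value
            waiters, self._waiters = self._waiters, []
        for w in waiters:
            w.put(value)
```

With this shape, concurrent `get` calls never race to execute `compute`: the lock ensures the body is submitted once, and late arrivals either register a waiter or read the cached value directly.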
…nged (#6059)

### Problem

As described in #4558, we currently completely delete `Node`s from the `Graph` when their inputs have changed. One concrete case where this is problematic is that all `Snapshot`s in the graph end up with a dependency on the `scandir` outputs of all of their parent directories, because we need to expand symlinks recursively from the root when consuming a `Path` (in order to see whether any path component on the way down is a symlink). This means that changes anywhere above a `Snapshot` invalidate that `Snapshot`, and changes at the root of the repo invalidate _all_ `Snapshot`s (although 99% of the syscalls they depend on are not invalidated, having no dependencies of their own).

But this case is just one of many affected by the current implementation: there are many other times where we re-compute more than we should due to the current `Node` invalidation strategy.

### Solution

Implement node "dirtying", as described on #4558. There are a few components to this work:

* In addition to being `Entry::clear`ed (which will force a `Node` to re-run), a `Node` may be `Entry::dirty`ed. A "dirty" `Node` is eligible to be "cleaned" if its dependencies have not changed since it was dirtied.
* Each `Node` records a `Generation` value that acts as a proxy for "my output has changed". The `Node` locally maintains this value, and when a `Node` re-runs for any reason (either due to being `dirtied` or `cleared`), it compares its new output value to its old output value to determine whether to increment the `Generation`.
* Each `Node` records the `Generation` values of the dependencies that it used to run, at the point when it runs. When a dirtied `Node` is deciding whether to re-run, it compares the previous generation values of its dependencies to their current generation values: if they are equal, then the `Node` can be "cleaned": ie, its previous value can be used without re-running it.
This patch also expands the testing of `Graph` to differentiate dirtying a `Node` from clearing it, and confirms that the correct `Node`s re-run in each of those cases.

### Result

Cleaning all `Node`s involved in `./pants list ::` after touching `pants.ini` completes 6 times faster than recomputing them from scratch (56 seconds vs 336 seconds in our repository). More gains are likely possible by implementing the performance improvement(s) described on #6013.

Fixes #4558 and fixes #4394.
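The generation scheme in the bullets above can be condensed into a small sketch. This is a simplified Python illustration under assumed names (`Node`, `get`, `clear` here are not the engine's real API, and the real engine does this concurrently in Rust): a dirtied node first re-fetches its dependencies, and only re-runs its own body if some dependency's generation actually changed.

```python
# Minimal sketch (assumed names, not pants' actual Graph code) of
# generation-based cleaning: dirty nodes reuse their old value when
# their dependencies' generations are unchanged.
class Node:
    def __init__(self, compute, deps=()):
        self.compute = compute            # fn(list_of_dep_values) -> value
        self.deps = list(deps)
        self.generation = 0
        self.dep_generations = []         # generations observed at last run
        self.value = None                 # None doubles as "never ran"
        self.dirty = True

    def clear(self):
        # The Entry::clear case: discard the value, forcing a re-run.
        self.value = None
        self.dirty = True

    def get(self):
        if not self.dirty:
            return self.value
        dep_values = [d.get() for d in self.deps]
        current_gens = [d.generation for d in self.deps]
        if self.value is not None and current_gens == self.dep_generations:
            # "Clean": deps unchanged since we were dirtied, so the
            # previous value is valid without re-running compute.
            self.dirty = False
            return self.value
        new_value = self.compute(dep_values)
        # Bump our generation only if our output actually changed, so
        # that our own dependents can in turn be cleaned.
        if new_value != self.value:
            self.generation += 1
        self.value = new_value
        self.dep_generations = current_gens
        self.dirty = False
        return self.value
```

The key property is the one the PR measures: after dirtying a whole subgraph, only nodes whose transitive inputs genuinely changed pay the cost of `compute`; everything else is un-dirtied in memory.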
After a change event and during invalidation, we transitively delete `Node`s from the graph. But rather than deleting them, we should mark `Node`s "dirty" instead.

As described in the bazel/skyframe design, and in the adapton design, marking a `Node` dirty allows it to be "resurrected" without re-running if all of its inputs are subsequently marked clean with the same values they had before. This is an extremely efficient cache of the previous execution, because it can happen purely in memory.
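The invalidation walk itself barely changes under this proposal: it visits the same transitive dependents, but marks them instead of removing them. A simplified illustration (the graph shape and `invalidate` name are assumed for this sketch, not pants' actual `Graph` code):

```python
# Hedged illustration of the proposal: on a change event, walk dependent
# edges and mark nodes dirty rather than deleting their entries, so they
# remain available to be "cleaned" later.
def invalidate(graph, roots):
    """graph maps node -> set of dependents. Marks the roots and all of
    their transitive dependents dirty; returns the dirtied set."""
    dirtied = set()
    stack = list(roots)
    while stack:
        node = stack.pop()
        if node in dirtied:
            continue
        dirtied.add(node)                # mark dirty; do NOT delete the entry
        stack.extend(graph.get(node, ()))
    return dirtied
```

Because entries survive invalidation, their previous values and dependency generations survive too, which is exactly what makes the in-memory "resurrection" described above possible.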