Test that cache evictions propagate to parent queries.
When an object is evicted from the cache, common intuition says that any dangling references to that object should be proactively removed from elsewhere in the cache. Thankfully, this intuition is misguided, because a much simpler and more efficient approach to handling dangling references is already possible, without requiring any new cache features.

As the tests added in this commit demonstrate, the cleanup of dangling references can be postponed until the next time the affected fields are read from the cache, simply by defining a custom read function that performs any necessary cleanup, in whatever way makes sense for the logic of the particular field. This lazy approach is vastly more efficient than scanning the entire cache for dangling references would be, because it kicks in only for fields you actually care about, the next time you ask for their values.

For example, you might have a list of references that should be filtered to exclude the dangling ones, or you might want the dangling references to be nullified in place (without filtering), or you might have a single reference that should default to something else if it becomes invalid. All of these options are matters of application-level logic, so the cache cannot choose the right default strategy in all cases. By default, references are left untouched unless you define custom logic to do something else.

It may actually be unwise/destructive to remove dangling references from the cache, because the evicted data could always be written back into the cache at some later time, restoring the validity of the references. Since eviction is not necessarily final, dangling references should be preserved by default after eviction, and filtered out just in time to keep them from causing problems. And even if you ultimately decide to prune the dangling references, proactively removing them is way more work than letting a read function handle them on-demand.

This system works because the result caching system tracks hierarchical field dependencies in a way that causes read functions to be reinvoked any time the field in question is affected by updates to the cache, even if the changes are nested many layers deep within the field. It also helps that custom read functions are consistently invoked for a given field any time that field is read from the cache, so you don't have to worry about dangling references leaking out by other means.

I recommend reading through this test not only because it demonstrates important capabilities of InMemoryCache, but also because the mythological subject matter contains some good jokes, IMHO.
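To make the filtering and defaulting options described above concrete, here is a minimal sketch of such read functions, assuming hypothetical `deities` and `ruler` fields on `Query` (the actual test defines its own mythological schema). It leans on the `canRead` helper that InMemoryCache passes to field read functions to detect references that no longer point at cached data:

```ts
import { InMemoryCache, Reference } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // List field: lazily drop references that can no longer be read,
        // the next time this field is read from the cache.
        deities: {
          read(existing: Reference[] | undefined, { canRead }) {
            return existing ? existing.filter(canRead) : [];
          },
        },
        // Single-reference field: fall back to null if the reference has
        // become dangling after an eviction.
        ruler: {
          read(existing: Reference | undefined, { canRead }) {
            return canRead(existing) ? existing : null;
          },
        },
      },
    },
  },
});
```

Nullifying in place rather than filtering would be a one-line change in the list field's read function (for example, `existing.map(ref => canRead(ref) ? ref : null)`), which is why the choice is best left to application-level logic rather than a cache-wide default.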