Avoid copying entire cache on each optimistic read. #4319
Conversation
```ts
    return this;
  }

  public toObject(): NormalizedCacheObject {
```
It seemed important to implement the `toObject` method correctly, but one big benefit of this PR is that `toObject` should no longer be called in the course of normal `InMemoryCache` usage. Maybe if you happen to call `cache.extract(true)` while optimistic updates are active, but only then.
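For readers following along, here is a minimal sketch (with simplified stand-in types, not the exact code from this PR) of how a layer's `toObject` can flatten itself over its parent on demand, which is why only `cache.extract(true)` ever needs it:

```ts
// Simplified sketch: the only consumer of toObject() in normal operation is
// extract(true), which has to flatten the optimistic layers into one object.
type NormalizedCacheObject = Record<string, any>; // stand-in for the real type

class ObjectCache {
  constructor(protected data: NormalizedCacheObject = Object.create(null)) {}

  public toObject(): NormalizedCacheObject {
    return this.data;
  }
}

class OptimisticCacheLayer extends ObjectCache {
  constructor(public readonly parent: ObjectCache) {
    super();
  }

  public toObject(): NormalizedCacheObject {
    // Entries in this (newer) layer win over entries from the parent chain.
    return {
      ...this.parent.toObject(),
      ...this.data,
    };
  }
}
```

Because flattening only happens inside `toObject`, ordinary optimistic reads never pay for a copy.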
```ts
      return null;
    }

    const store =
      query.optimistic && this.optimistic.length
        ? new ObjectCache(this.extract(true))
```
This line was previously really expensive.
```diff
@@ -174,13 +203,8 @@ export class InMemoryCache extends ApolloCache<NormalizedCacheObject> {
   }

   public diff<T>(query: Cache.DiffOptions): Cache.DiffResult<T> {
     const store =
       query.optimistic && this.optimistic.length
         ? new ObjectCache(this.extract(true))
```
This line was previously really expensive, too.
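A hedged sketch of what replaces these expensive lines under the layered design (the property names follow the PR description, but the class shape is simplified): `read` and `diff` only need to choose which store to consult, rather than copying the cache.

```ts
// Sketch only: previously each optimistic read built a throw-away
// ObjectCache from this.extract(true); now it just picks a store.
interface NormalizedCache {
  get(dataId: string): any;
}

class InMemoryCacheSketch {
  private data!: NormalizedCache;           // the real, non-optimistic store
  private optimisticData!: NormalizedCache; // top layer, or === data when none

  private selectStore(optimistic: boolean): NormalizedCache {
    return optimistic ? this.optimisticData : this.data;
  }
}
```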
```ts
    // Re-run all of our optimistic data actions on top of one another.
    toPerform.forEach(change => {
      this.recordOptimisticTransaction(change.transaction, change.id);
```
Every single time we called `this.recordOptimisticTransaction` here, the entire cache was copied.
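Based on the PR description, here is a sketch of how layer removal can avoid those copies: drop the layers with the matching id and rebuild only the layers that sat above them by re-running their transactions. The names are illustrative, not the package's exact internals.

```ts
// Illustrative sketch of removeOptimistic over a linked list of layers.
type Transaction = (cacheProxy: unknown) => void;

class LayerSketch {
  constructor(
    public readonly optimisticId: string,
    public readonly parent: LayerSketch | null,
    public readonly transaction: Transaction,
  ) {}
}

function removeLayer(
  top: LayerSketch | null,
  idToRemove: string,
): LayerSketch | null {
  if (top === null) return null;

  // Recursively clean up the layers underneath first.
  const parent = removeLayer(top.parent, idToRemove);

  if (top.optimisticId === idToRemove) {
    return parent; // discard this layer entirely
  }
  if (parent === top.parent) {
    return top; // nothing below changed, so this layer is still valid
  }
  // A layer below was removed: reapply this layer's transaction on top of
  // the new parent (the real cache re-runs the recorded transaction here).
  return new LayerSketch(top.optimisticId, parent, top.transaction);
}
```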
```diff
@@ -1,23 +1,28 @@
 import { NormalizedCache, NormalizedCacheObject, StoreObject } from './types';

 export class ObjectCache implements NormalizedCache {
-  constructor(private data: NormalizedCacheObject = Object.create(null)) {}
+  constructor(protected data: NormalizedCacheObject = Object.create(null)) {}

   public toObject(): NormalizedCacheObject {
```
The only functional change in this module was making `data` a `protected` rather than `private` member, since we need to access it in the `OptimisticCacheLayer` subclass.
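A minimal sketch of why that visibility change matters (the `OptimisticCacheLayer` constructor shown here is an assumption for illustration): the subclass checks its own `data` first and only then delegates to the parent, which is the `this.parent.get(dataId)` delegation described in the PR.

```ts
type StoreObject = Record<string, any>;
type NormalizedCacheObject = { [dataId: string]: StoreObject };

class ObjectCache {
  // `protected` (rather than `private`) so subclasses can read this.data.
  constructor(protected data: NormalizedCacheObject = Object.create(null)) {}

  public get(dataId: string): StoreObject {
    return this.data[dataId];
  }

  public set(dataId: string, value: StoreObject): void {
    this.data[dataId] = value;
  }
}

class OptimisticCacheLayer extends ObjectCache {
  constructor(
    public readonly optimisticId: string,
    public readonly parent: ObjectCache, // assumed constructor shape
  ) {
    super();
  }

  public get(dataId: string): StoreObject {
    // Check this layer's own data first, then fall back to the parent chain.
    return dataId in this.data
      ? this.data[dataId]
      : this.parent.get(dataId);
  }
}
```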
```diff
 import { NormalizedCacheObject } from '../types';

-describe('RecordingCache', () => {
+describe('OptimisticCacheLayer', () => {
```
I was tempted just to `git rm src/__tests__/recordingCache.ts`, since `src/recordingCache.ts` no longer exists, but then I realized these tests could be adapted to use the new `OptimisticCacheLayer` class instead. Similar in spirit, though very different in implementation.
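A hedged sketch of the kind of adapted test this could become (the import path and the `OptimisticCacheLayer` constructor arguments are assumptions, not the package's actual test code):

```ts
import { ObjectCache, OptimisticCacheLayer } from '../objectCache'; // assumed path

describe('OptimisticCacheLayer', () => {
  it('shadows entries without copying or mutating its parent', () => {
    const parent = new ObjectCache({ ROOT_QUERY: { count: 1 } });
    const layer = new OptimisticCacheLayer('opt-1', parent); // assumed args

    layer.set('ROOT_QUERY', { count: 2 });

    // The layer's own entry wins for optimistic reads...
    expect(layer.get('ROOT_QUERY')).toEqual({ count: 2 });
    // ...while the underlying (non-optimistic) data is untouched.
    expect(parent.get('ROOT_QUERY')).toEqual({ count: 1 });
  });
});
```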
```diff
@@ -8,5 +8,4 @@ export * from './readFromStore';
 export * from './writeToStore';
 export * from './fragmentMatcher';
 export * from './objectCache';
-export * from './recordingCache';
```
I very much doubt anyone was using the `RecordingCache` outside of the tests in this package, though I suppose this could be considered a minor breaking change.
SO PUMPED ABOUT THIS!!!!
Previously, the `InMemoryCache` maintained an array of recorded optimistic updates, which it would merge together into an entirely new `ObjectCache` whenever performing any single optimistic read.

This merging behavior was wasteful, but the real performance killer was calling `this.extract(true)` each time, which copied the entire underlying (non-optimistic) cache, just to create a throw-away `ObjectCache` for the duration of the optimistic read. Worse still, `this.extract(true)` was also called in `recordOptimisticTransaction`, to create a throw-away `RecordingCache` object.

Instead of creating temporary copies of the entire cache, `InMemoryCache` now maintains a linked list of `OptimisticCacheLayer` objects, which extend `ObjectCache` and implement the `NormalizedCache` interface, but cleverly delegate to a parent cache for any missing properties. This delegation happens simply by calling `this.parent.get(dataId)`, so there is no need to extract/copy the parent cache.

When no optimistic data are currently active, `cache.optimisticData` will be the same (`===`) as `cache.data`, which means there are no additional layers on top of the actual data. Lookup time is proportional to the number of `OptimisticCacheLayer` objects in the linked list, so it's best if that list remains reasonably short, but at least that's something the application developer can control.

Calling `removeOptimistic(id)` removes all `OptimisticCacheLayer` objects with the given `id`, and then reapplies the remaining layers by re-running their transaction functions.

These improvements probably would have made the excessive memory usage reported in #4210 much less severe, though disabling dependency tracking for optimistic reads (the previous solution) still seems like a good idea.
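To make the effect concrete, here is a hedged usage-level sketch of the code paths this change speeds up; the public `InMemoryCache` API calls below are real, but the query, data, and transaction id are made up for illustration.

```ts
import { InMemoryCache } from 'apollo-cache-inmemory';
import gql from 'graphql-tag';

const query = gql`
  query GetCount {
    count
  }
`;

const cache = new InMemoryCache();
cache.writeQuery({ query, data: { count: 1 } });

// Recording a transaction now just pushes an OptimisticCacheLayer onto the
// linked list; previously it copied the whole cache via this.extract(true).
cache.recordOptimisticTransaction(proxy => {
  proxy.writeQuery({ query, data: { count: 2 } });
}, 'increment-1');

// Optimistic reads consult the layer stack, falling back to the real data.
cache.readQuery({ query }, true);  // { count: 2 }
cache.readQuery({ query }, false); // { count: 1 }

// extract(true) is now the main operation that has to flatten the layers.
const snapshot = cache.extract(true); // includes the optimistic write

// Removing the layer reapplies any remaining layers (none here), leaving
// cache.optimisticData === cache.data again.
cache.removeOptimistic('increment-1');
cache.readQuery({ query }, true); // { count: 1 }
```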
…thods. Once we stopped calling `this.transformDocument` in these method implementations, they became character-for-character identical to the implementations already provided by the `ApolloCache` superclass, so we can simply inherit them as-is.
Force-pushed from 2658796 to 4a87022.
👍 👍 👍 @benjamn - so great! L(immediately)GTM!
This has been released in