Have `cache.modify` functions receive `options.args` #7129
Comments
Here's my latest thinking: a workaround that works today, and avoids the need to store unused args.
Why such an aversion to storing args, though? They won't take up much space compared to the data, as far as I can imagine. That workaround takes extra trouble to store the args ourselves, and feels like a duct-tape solution instead of something systematic and elegant. I'd really rather the cache store args automatically in case we need to use them.
@jedwards1211 Since I first wrote the comment that you quoted above (in support of storing args), I've come to think the last arguments can be misleading, depending on how the field is used. If you're paginating the field, for example, the last arguments (say, the most recent offset and limit) describe only the last page that was fetched, not the list as a whole.
Actually, I made a mistake in my last comment suggesting that I would need the page range from args. Relay-style pagination returns the page range in the query data, and my Apollo 2 updaters already have to update the page range in the cached query data accordingly. It's the search and sort order I would want from args for handling creates and deletes, and when those change I have to throw away the cached list and start from scratch. But maybe I should echo the search and sort args in the query results instead of only the page info Relay prescribes...

So basically all list fields in my app are either Relay-style paginated or not paginated at all, hence all args besides the Relay pagination parameters (which are stored in data anyway) remain relevant. So it would be super handy to automatically have those last args. I guess for people who don't want to use Relay-style pagination... simpler pagination schemes seem like an uphill battle (for example, if you need to paginate backwards because the user scrolled far down and you had to evict the head of the list, you're hosed with plain cursor/limit pagination).
Yes, many simpler pagination schemes sacrifice the ability to paginate backwards, and might skip elements if the list is modified between page requests. Relay-style pagination attempts to solve these problems with connections, edges, and cursors, but it comes at a steep cost: if you're using Relay, every single paginated field must follow their pagination specification, even if you don't need the extra complexity in some cases, or you need additional/different functionality not present in that specification. In AC3, we've been able to abstract away the details of Relay-style pagination with the `relayStylePagination` field policy helper, whether you're updating data manually or using the helper as-is.
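For reference, `relayStylePagination` is exported from `@apollo/client/utilities` and is wired up as a field policy; a minimal sketch (the `comments` field name is just an example):

```ts
import { InMemoryCache } from "@apollo/client";
import { relayStylePagination } from "@apollo/client/utilities";

// The helper supplies keyArgs/read/merge for a Relay-style connection field,
// so queries and updates don't have to deal with edges and cursors directly.
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        comments: relayStylePagination(),
      },
    },
  },
});
```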
That's cool, I'm looking forward to using the `relayStylePagination` helper.

Imagine you fetch the first 10 users (out of millions) sorted by name, and those users just happen to have ages of 12, 17, 23, 35, 41, 58, etc. Then you switch to sorting by age. Tons of users fall between each of those ages when sorting by age, and you can't know a priori where the gaps are, so you have to either fetch the first 10 users by age, or pick one of those cached ages as your start cursor. And then you'll end up refetching the old cached users anyway as you scroll.
I see the point about the last args potentially misleading people depending on the use case... I guess I wish I could define a blanket policy that acts on all fields to stash the last args, instead of having to do it for each list field individually.
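One way to approximate such a blanket policy today is a `merge` function that stashes the incoming `args` in a side map keyed by `storeFieldName`, so a later `cache.modify` can look them up instead of parsing the key. This is only a sketch, not an official API; the `searchResults` field name and the `argsByStoreFieldName` map are assumptions for illustration:

```ts
import { InMemoryCache } from "@apollo/client";

// Side map from storeFieldName to the last args seen for that cache entry.
const argsByStoreFieldName = new Map<string, Record<string, any> | null>();

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // Hypothetical list field; repeat (or generate) this policy for each list field.
        searchResults: {
          merge(existing, incoming, { args, storeFieldName }) {
            // Stash the args so cache.modify callbacks can recover them later.
            argsByStoreFieldName.set(storeFieldName, args);
            return incoming; // simple replace-on-merge, enough for this sketch
          },
        },
      },
    },
  },
});

// Later, inside a mutation update function:
// cache.modify({
//   id: "ROOT_QUERY",
//   fields: {
//     searchResults(value, { storeFieldName }) {
//       const args = argsByStoreFieldName.get(storeFieldName);
//       // decide what to do with this entry based on args.sort, args.search, ...
//       return value;
//     },
//   },
// });
```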
This is an issue for far more than pagination; it impacts invalidation and modification of ROOT_QUERY fields that use keyArgs. The fact that the details passed to the `cache.modify` modifier functions don't include the field's arguments makes this very painful.

For example, imagine a query which has 4-5 optional variables which are all used as keyArgs. There may be hundreds or more entries in the cache (think typeahead), with a huge number of potential combinations. If you add an item via a mutation, there is no easy way to search through the cache to determine which queries you need to modify, because you have no access to the keyArgs. You would literally have to guess every possible combination and check whether it exists inside the `cache.modify` function.

I apologize for any errors in the code below, as I am typing it directly here.

```graphql
query GetItems(
  $campaignId: ID
  $type: String
  $keyword: String
  $targetType: String
) {
  getItems(
    campaignId: $campaignId
    type: $type
    keyword: $keyword
    targetType: $targetType
  ) {
    id
    name
    type
  }
}
```

Assume all of these variables are used as keyArgs. Every time a user updates `keyword` (or changes `type`, etc.), a new cache entry is created. Now assume I add a new item of type "Foo" to campaign id "1". That means I need to figure out which `getItems` entries have a matching `type` argument.

For example, this update function using `cache.modify` doesn't really leave me a great way to figure out which of the `getItems` entries on the root query I actually care about:

```ts
update: (cache, result) => {
  cache.modify({
    id: 'ROOT_QUERY',
    fields: {
      getItems(value, details) {
        // Called for every getItems entry, but with no access to keyArgs
        console.log(details.storeFieldName) // e.g. getItems:{"type":"Bar","keyword":"f"}
        return value
      }
    }
  })
}
```

What would be better:

```ts
update: (cache, result) => {
  cache.modify({
    id: 'ROOT_QUERY',
    fields: {
      getItems(value, details) {
        // Allow checking the keyArgs
        if (details.keyArgs.type === result.type) {
          // Do the update on entries where the type matches the mutation result;
          // the other args are irrelevant (keyword, campaign, etc.)
        }
        // Return other values unchanged; they don't need to be modified
        return value
      }
    }
  })
}
```

This also extends to `cache.evict`. I use this hacky helper function frequently to determine which queries I need to evict. The main "trick" here is that the helper parses the cache key (which has the keyArgs included) into an object containing the field name and keyArgs, and the filter function is then invoked with that object. This helper could be removed if there was a way to get at the keyArgs for each field in the cache other than parsing the `storeFieldName` string.

```ts
update: (cache, result) => {
  invalidateApolloCacheFor(cache, (field, keyArgs) => {
    return field === 'getItems' && keyArgs?.type === result.type
  });
}
```

```ts
import { ApolloCache } from '@apollo/client';

type FieldAndArgsTest = (fieldName: string, args: Record<string, any> | null) => boolean;

// Minimal getSafe: run fn, falling back to a default value if it throws.
const getSafe = <T>(fn: () => T, fallback: T): T => {
  try { return fn(); } catch { return fallback; }
};

export const invalidateApolloCacheFor = (
  cache: ApolloCache<any>,
  fieldAndArgsTest: FieldAndArgsTest) => {
  // Extract all keys on the root query
  const rootQueryKeys = Object.keys(cache.extract().ROOT_QUERY);
  const itemsToEvict = rootQueryKeys
    .map(key => extractFieldNameAndArgs(key))
    .filter(({ fieldName, args }) => fieldAndArgsTest(fieldName, args));
  itemsToEvict.forEach(({ fieldName, args }) => {
    cache.evict({
      id: 'ROOT_QUERY',
      fieldName,
      args
    });
  });
};

export const extractFieldNameAndArgs = (key: string) => {
  if (!key.includes(':')) {
    return { fieldName: key, args: null };
  }
  const separatorIndex = key.indexOf(':');
  const fieldName = key.slice(0, separatorIndex);
  const args = convertKeyArgs(key);
  return { fieldName, args };
};

// Convert the keyArgs stored as a string in the query key to an object.
const convertKeyArgs = (key: string): Record<string, any> | null => {
  const separatorIndex = key.indexOf(':');
  const keyArgs = key.slice(separatorIndex + 1);
  // @connection directives wrap the keyArgs in ()
  // TODO: Remove when legacy @connection directives are removed
  const isLegacyArgs = keyArgs.startsWith('(') && keyArgs.endsWith(')');
  const toParse = isLegacyArgs ? keyArgs.slice(1, keyArgs.length - 1) : keyArgs;
  // We should have a string here that can be parsed to JSON, or null
  const args = getSafe(() => JSON.parse(toParse), null);
  return args;
};
```
It would be great to have the solution that @raysuelzer outlined above. I have the same problem when updating the cache for parameterized fields. I'm surprised more Apollo users aren't complaining about this, since it should be an obstacle for anyone whose GraphQL server exposes parameterized fields. My current workaround is to use …
FYI in case it's helpful: there is actually a way to use …

So, pretty much, you simply need to return the original cached value when …

Hope this helps.
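The comment above is truncated here, but the general pattern it points at (returning the original cached value untouched for entries you don't care about) can be applied inside `cache.modify` by matching on `storeFieldName`. A sketch, borrowing the `getItems` field and `type` argument from the earlier example; `newItem` is assumed to be the mutation result already written to the cache:

```ts
cache.modify({
  id: "ROOT_QUERY",
  fields: {
    getItems(existing = [], { storeFieldName, toReference }) {
      // Only touch entries whose serialized args mention the type we just added.
      if (!storeFieldName.includes('"type":"Foo"')) {
        return existing; // return the original cached value unchanged
      }
      return [...existing, toReference(newItem)];
    },
  },
});
```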
Thanks in advance. My question is related to `cache.modify`: how can I modify 2 different fields inside one `cache.modify` call?
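For what it's worth, the `fields` map passed to `cache.modify` accepts one modifier function per field, so a single call can update several fields on the same cache entry. A small sketch (the entity, field names, and `userId` variable are made up for illustration):

```ts
cache.modify({
  id: cache.identify({ __typename: "User", id: userId }),
  fields: {
    // Each entry modifies a different field of the same User object.
    postCount(existing: number) {
      return existing + 1;
    },
    lastActiveAt() {
      return new Date().toISOString();
    },
  },
});
```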
It was resolved after Apollo 3.5.
Hi all, I'm doing some housekeeping and am curious whether you agree with @taejs's comment re: version 3.5 using …
No, this hasn't been resolved.
Thanks for the quick response @mgummelt, and thanks all for your patience! As mentioned above, please also see the open feature request: apollographql/apollo-feature-requests#259. For transparency, the maintainers will not be prioritizing this item in the near future, but we do want to keep it on our radar!
Hey, I had the same issue some time ago but implemented a solution which has helped me so far and seems to work well, so I'll post it here in case it helps anyone. Note: the one caveat is that I need to be able to get my input args wherever I use this, but that hasn't been an issue for me. I just hash the keyArgs in my type policies when I need it on specific fields.

Then in my `cache.modify`, whenever I need to use it, I just rebuild the key; this allows me to get the data I need or perform operations specific to that dataset and return the structure accordingly.
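The original snippets didn't survive here, so below is only a sketch of the approach as described, not the commenter's actual code. The `stableStringify` helper, the `items` field, the `status` argument, and the `newItem` variable are assumptions, and the exact shape of `storeFieldName` may differ, which is why the sketch matches with `includes` rather than exact equality:

```ts
import { InMemoryCache } from "@apollo/client";

// Deterministic serialization of the args we key by.
const stableStringify = (args: Record<string, any>) =>
  JSON.stringify(args, Object.keys(args).sort());

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        items: {
          // keyArgs as a function: the returned string becomes part of storeFieldName.
          keyArgs: (args) => `items:${stableStringify(args ?? {})}`,
        },
      },
    },
  },
});

// Later, in a mutation update function, rebuild the same hash from the args
// known at the call site and match on it.
const wanted = stableStringify({ status: "OPEN" });
cache.modify({
  id: "ROOT_QUERY",
  fields: {
    items(existing = [], { storeFieldName, toReference }) {
      if (!storeFieldName.includes(wanted)) {
        return existing; // not the dataset we care about
      }
      // newItem is assumed to be the mutation result, already written to the cache.
      return [...existing, toReference(newItem)];
    },
  },
});
```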
We use namespaces (i.e. user), but this should be enough for you to repurpose for your use cases. Hope this helps, as I remember struggling with this for a lonnng time a year or so ago.
I ended up using @ayarmak's solution (thx).
I'm still waiting for a more versatile and worry-free approach to this. I may be greedy, but it would be a game changer for sure. Currently, for multiple complex parameterized queries, relying on subscriptions / refetching seems to be the better way to ensure maximum consistency of the data with (much) less effort.

The problem with `updateQuery`, or with filtering by `storeFieldName`, is that we have to modify the filtering logic / `updateQuery` variables frequently, depending on the number of parameterized queries.
It would be very helpful if `cache.modify` received `options.args`, similar to what `read` and `merge` functions do.

A current case that we come across is that we have large lists of items in our cache, which are under the same field name but queried with a status argument/variable. When creating a new item, we currently query not only the resulting new item, but the lists as well. To increase speed and reduce over-fetching, we changed the mutation to only query the newly created object itself, and manually add it to several lists using `cache.modify`. However, the new item should only be appended to the field values which correspond to certain arguments. To do this, we now rely on parsing of `storeFieldName` strings; a much preferred solution would be to work with the arguments directly, but for that they would have to be provided through `options.args`.

Since @benjamn extensively described some of the practical implications in a previous comment, I've included this:
Originally posted by @benjamn in #6289 (comment)
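To make the `storeFieldName` parsing workaround from the issue description concrete, here is a sketch (not the reporter's actual code): the `items` field, the `status` argument, and the `newItem` variable are illustrative assumptions, and the two key formats handled are the default `items({"status":"OPEN"})` and the keyArgs form `items:{"status":"OPEN"}`:

```ts
cache.modify({
  id: "ROOT_QUERY",
  fields: {
    items(existing = [], { storeFieldName, fieldName, toReference }) {
      // Strip the field name and the surrounding ':' or '(...)' to get the args JSON.
      const raw = storeFieldName
        .slice(fieldName.length)
        .replace(/^[:(]/, "")
        .replace(/\)$/, "");
      let args: { status?: string } | null = null;
      try {
        args = raw ? JSON.parse(raw) : null;
      } catch {
        args = null;
      }
      // Only append the new item to the list whose status argument matches.
      if (args?.status !== newItem.status) {
        return existing;
      }
      // Assumes newItem carries __typename and id and was already written by the mutation.
      return [...existing, toReference(newItem)];
    },
  },
});
```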