How to handle caching without referencing existing queries? #3505

Closed
ianstormtaylor opened this issue May 24, 2018 · 23 comments
Labels
🏓 awaiting-contributor-response (requires input from a contributor) · 🚧 in-triage (issue currently being triaged) · ✍️ working-as-designed

Comments

@ianstormtaylor

For mutations that are auto-updatable via dataIdFromObject, you never have to worry about which original queries need updating, since it happens under the covers.

However, for mutations that add or remove items from a list, how do you update them without referencing the original queries?

In the Mutations docs there is an example of updating the cache after an "add" mutation, but it shows passing in an existing query constant to do it. For example:

<Mutation
  mutation={ADD_TODO}
  update={(cache, { data: { addTodo } }) => {
    const { todos } = cache.readQuery({ query: GET_TODOS });
    cache.writeQuery({
      query: GET_TODOS,
      data: { todos: todos.concat([addTodo]) }
    });
  }}
>

But the GET_TODOS here is a constant.

It seems like this causes multiple issues?

Since your mutations are tightly coupled to your queries, now every time you add a new query that happens to be impacted by any existing mutation, you need to remember to go find the mutation and change its cache updating logic. You can't keep all of this in your head, so you're going to end up with cache invalidation bugs.

But there's a more subtle problem...

If you extrapolate this example, you end up having to keep constants for every query in your application. If you architect your application this way, it seems like you're forced into a choice:

  • Either you go with a "query-per-component" architecture, where you have tons and tons of query constants, one for each component, that you have to manually update depending on whether they are impacted by any "insert/delete" mutations...

  • ...or, to reduce the number of queries, you go with a "canonical query" architecture, whose queries contain all the relevant data for each object, so you're no longer retrieving exactly the fields you need. This feels like we've just re-implemented REST again?

Is there no other solution to this problem?

Or is this kind of inherent to how GraphQL and Apollo work? Since Apollo doesn't really have enough information to update all of these queries?

I might be missing something here, and if so I'd be very relieved to hear that, but this seems like a huge flaw in the argument for using Apollo in the first place? For "update" mutations its caching works perfectly, but to handle "insert/delete" mutations it requires you to give up GraphQL's major benefits?

It feels like the architecture breaks down for anything more complex than todo list demos.

@Wizyma

Wizyma commented May 25, 2018

Hi @ianstormtaylor
Is this about local state management with ApolloClient?
I am a bit confused by your example...
Thanks

@ianstormtaylor
Author

Hey @Wizyma yup, this is for local cache management using apollo-client and react-apollo. The example above is excerpted from this page: https://www.apollographql.com/docs/react/essentials/mutations.html

@Wizyma

Wizyma commented May 26, 2018

@ianstormtaylor I don't know if I'm using it the wrong way, but I have never once used the cache variable in my components.
When you build your client state management with Apollo Client, clientState gives you several options, such as:
defaults
resolvers
typeDefs

My whole application logic lives in the folder that manages the state (like Redux with its reducers, actions, etc.),
and it is my resolvers that handle all the mutations.
I then import my mutations and my queries when I need them.
I actually never use the cache variable in my components; the resolvers handle all the logic.

If you want, I'll create an example repo for more details.

@ianstormtaylor
Author

@Wizyma I'm not sure I understand what you mean. I'm looking for a way to update the UI after mutations (probably involving the cache) that doesn't require you to either (a) manually keep track of every possible query/mutation permutation, which is very bug-prone, or (b) consolidate down to a few query constants, which takes GraphQL back to a REST-like set of representations.

@Wizyma

Wizyma commented May 27, 2018

@ianstormtaylor I see, so which alternative do we have?
I think it would be nice if it worked like GraphQL itself, where you make a mutation and can specify which data you want returned after the mutation.

@ianstormtaylor
Author

Anyone from the Apollo team have insight into this? This feels like a really core issue that isn't solved right now...

@riccoski

How it works seems to make sense; how would it know what to update? What would be your solution?
I'm curious.

While typing this, I guess there could be something like cache redirects, where you can link a cache update to a typename plus an @insert or @delete directive.
But I think lots of people would have different ideas about how this should work; maybe that's why it's manual.

@ianstormtaylor
Author

@riccoski I'm unsure what the solution is. My point is that it seems like an intractable issue, one that eliminates a major selling point of GraphQL in the first place. For the moment at least, it has put me off using Apollo/GraphQL.


cc @stubailo any thoughts here from anyone at Apollo?

@Aetherall

Aetherall commented Jun 16, 2018

This has been my main issue for months, and the result is an excessive use of refetchQueries...

@ianstormtaylor
Author

@stubailo @peggyrayzis or anyone from Apollo care to comment? This feels like a kind of dirty secret that Apollo can't really solve but that isn't admitted to in marketing materials.

@rodrigo-brito

I have the same problem here. At the moment I'm refetching the queries after each mutation. I discovered some time ago that you can pass just the query name to refetchQueries, without any params; Apollo reuses the params from the last run of that query for the refetch. It helps me a lot.
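A minimal sketch of what this could look like, assuming an ADD_TODO mutation and an active query whose operation name is GetTodos (both placeholder names, with client and ADD_TODO already in scope):

// Refetch by operation name only: Apollo matches "GetTodos" against the
// currently active queries and re-runs each one with the variables it was
// last executed with, so the mutation never needs to know them.
client.mutate({
  mutation: ADD_TODO,
  variables: { type: 'Buy milk' },
  refetchQueries: ['GetTodos'],
});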

@souserge

souserge commented Oct 4, 2018

The problem isn't specific to either Apollo or GraphQL – cache management is hard, and there's little one can do about it. I'm also struggling with it, but I have a few ideas on how to structure caching updates. Currently, I'm using apollo-link-watched-mutation to keep the update logic in one place. Here are some tips to make your life simpler:

  • If you can influence the schema definition, make sure to disallow nulls wherever you can (especially in nested/non-primitive fields and lists). Fewer nulls means fewer things to keep track of and worry about;
  • Always query for ids. As mentioned in other comments, caching problems arise only when adding/deleting objects from lists, since Apollo Client handles updates automatically in most cases, provided that you always query for an id field and for the affected fields after every mutation;
  • To make sure that your mutations return all the fields necessary for updating the cached values, use GraphQL fragments;
  • Try to write generic update functions that can be easily reused. In my case, I achieved this by heavily using higher-order functions, plus Ramda and its implementation of a pattern for getting/setting properties of objects called "lenses". My solution is still far from ideal, but so far it has simplified cache management quite a lot (a sketch of the fragment and generic-helper ideas follows this list).
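A hedged sketch of the fragment and generic-helper tips, using made-up names (TODO_FIELDS, GET_TODOS, ADD_TODO, appendTo) based on the to-do example from earlier in this thread:

import gql from 'graphql-tag';

// One fragment shared by the query and the mutation guarantees that the
// mutation result carries every field the cached list needs.
const TODO_FIELDS = gql`
  fragment TodoFields on Todo {
    id
    type
  }
`;

const GET_TODOS = gql`
  query GetTodos {
    todos {
      ...TodoFields
    }
  }
  ${TODO_FIELDS}
`;

const ADD_TODO = gql`
  mutation AddTodo($type: String!) {
    addTodo(type: $type) {
      ...TodoFields
    }
  }
  ${TODO_FIELDS}
`;

// Generic "append the mutation result to a cached list" helper, reusable for
// any parameterless list query. It returns a function matching the
// update={(cache, result) => ...} signature. Note that cache.readQuery throws
// if the query has never been run, so use it only for queries you know are cached.
const appendTo = (query, listField, getItem) => (cache, result) => {
  const data = cache.readQuery({ query });
  cache.writeQuery({
    query,
    data: { ...data, [listField]: data[listField].concat([getItem(result)]) },
  });
};

// Usage:
// <Mutation mutation={ADD_TODO} update={appendTo(GET_TODOS, 'todos', r => r.data.addTodo)}>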

P.S. I'm still quite frustrated that two overlapping queries cannot be cached together: for example, I have these two queries:

{
  users {
    id
    name
    email
  }
}
{
  users {
    id
    name
  }
}

If a mutation adds an entry to one list of users and the cache is updated accordingly, the entry will not be added to the cache of the second list of users. The caches have to be updated separately although, as far as I know, there's no situation in which I might need to update one and not the other. Can someone explain to me why this is not handled automatically?

@wmertens
Contributor

The feature request at apollographql/apollo-feature-requests#62 would cover this, right?

@f1ztech

f1ztech commented Nov 12, 2018

+1
The idea of normalization was that there would be only one place in the store where all object values are stored; in Apollo that is not true. So you again have to think about every component in the application that your update will touch, like it was before Redux and normalization.

For example, you have a list of foos, and below it a form where you can view/edit the details of one foo.

In classic Redux you would have a slice in the store where all foos are normalized and stored. When editing completes you update the foo in that slice, and the data gets automatically updated in the list (because the list uses the same foo slice of the store).

With Apollo you have two queries (list and detail) with independent caches; after updating a foo, the data is updated only in the detail view. To update the list you have to call refetch or update the cache manually using Apollo hooks.

@ecerroni
Contributor

I am the author of the apollo-cache-updater:

https://github.com/ecerroni/apollo-cache-updater

It's not perfect, but it lowers the boilerplate needed to update the cache in complex scenarios.

I am also constantly facing the same problem. I built the package to buy time until there is an official solution for in-place updates when adding/removing items.

@stalniy

stalniy commented Mar 4, 2019

Guys, I created a new feature suggestion and I think it may be a good starting point: apollographql/apollo-feature-requests#97

Please check

@benjamn
Member

benjamn commented May 1, 2019

@ianstormtaylor I think you're missing a small nuance that makes a big difference here: the query used by the mutation update function does not have to be the original GET_TODOS query.

For reference, here's the original GET_TODOS query along with its usage by the <Query> component:

const GET_TODOS = gql`
  {
    todos {
      id
      type
    }
  }
`;

const Todos = () => <Query query={GET_TODOS}>...</Query>;

Although you could implement the update function of the mutation using GET_TODOS, you absolutely do not have to! In this example (working implementation), both GET_TODOS and the new/simpler TODOS query refer to the same logical data in the cache, so the mutation works without any knowledge of the GET_TODOS query:

const TODOS = gql`{ todos { id } }`;
const AddTodo = () => (
  <Mutation
    mutation={ADD_TODO}
    update={(cache, { data: { addTodo } }) => {
      const { todos } = cache.readQuery({ query: TODOS });
      cache.writeQuery({
        query: TODOS,
        data: { todos: todos.concat([addTodo]) }
      });
    }}
  >...</Mutation>
);

You might be wondering why the mutation needs to use a query at all. Wouldn't it be nice if you could just push the new to-do object onto an array somewhere, and the UI would update automatically? Well, where would that array live? How would you get access to it from both the query and the mutation?

That's why the mutation needs the TODOS query (or GET_TODOS, if you like): because it's a convenient way to read the todos array out of the cache, and then write a new array back in its place. There are other ways of reading from and writing to the cache (such as readFragment and writeData), but I think using a query here makes the most sense because that's how the other querying logic works.

In many cases, your queries and mutations will naturally be coupled/colocated close enough to each other that you might as well reuse the original query to implement the mutation, but I hope this example convinces you that such close coupling is entirely optional.
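For completeness, here is a hedged sketch of the readFragment / writeFragment route mentioned above, assuming the cache's default data IDs ("Typename:id") and a Todo whose id is "1"; the fragment name is invented for the example, and cache is the object passed to an update function:

import gql from 'graphql-tag';

const TODO_TYPE = gql`
  fragment TodoType on Todo {
    id
    type
  }
`;

// Read one normalized Todo straight out of the cache, then write a modified
// copy back. Any cached query that selects these fields on Todo "1" will
// reflect the change.
const todo = cache.readFragment({ id: 'Todo:1', fragment: TODO_TYPE });
cache.writeFragment({
  id: 'Todo:1',
  fragment: TODO_TYPE,
  data: { ...todo, type: 'errand' },
});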

@stalniy

stalniy commented May 2, 2019

@benjamn you showed a simple use case where Mutation doesn’t need access to query’s variables in order to update the cache.

As soon as you need to update the cache based on variables, the implementation becomes tricky, especially when you need to update the cache of N queries.
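To make that concrete, here is a hedged sketch (the query, field, and variables are all invented for the example) of what an update function has to do once variables are involved: readQuery only returns the cache entry for the exact variables you pass, and Apollo Client 2.x has no built-in way to enumerate which combinations have already been fetched.

import gql from 'graphql-tag';

const TODOS_BY_LIST = gql`
  query TodosByList($listId: ID!) {
    todos(listId: $listId) {
      id
      type
    }
  }
`;

// Must be called once per listId that might already be cached; the update
// function itself has no supported way to discover those listIds.
const addTodoToCachedList = (cache, addTodo, listId) => {
  const variables = { listId };
  let cached;
  try {
    cached = cache.readQuery({ query: TODOS_BY_LIST, variables });
  } catch (e) {
    return; // this variables combination was never fetched, nothing to update
  }
  cache.writeQuery({
    query: TODOS_BY_LIST,
    variables,
    data: { todos: cached.todos.concat([addTodo]) },
  });
};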

@jbaxleyiii added the 🚧 in-triage label on Jul 9, 2019
@jbaxleyiii
Contributor

Thanks for reporting this. There hasn't been any activity here in quite some time, so we'll close this issue for now. If this is still a problem (using a modern version of Apollo Client), please let us know. Thanks!

@hinok

hinok commented Jul 17, 2019

@jbaxleyiii Yes, it's still a problem for us.

@gbouteiller

@jbaxleyiii As @hinok mentioned, it is still a problem. We mostly have workarounds to make our apps deal with it, but it would be good to either have this kind of thing handled by Apollo or have some best practices to refer to (for example, how to properly manage the deletion of one item when you have three components: a filtered list (so query variables), a paginated list (query variables, or a connection?), and a classic one (no variables)). We would also really appreciate your thoughts as a GraphQL expert. Thank you.

@Renaud009

The initial question has been entirely answered by @benjamn. Whatever query you pass to readQuery and writeQuery, the same entities (identified by __typename and id) will be updated in the cache, regardless of how many actual queries over those entities have been used.

@eric-burel

eric-burel commented Sep 26, 2019

@benjamn Your example works fine but only in the absence of variables in the query.

What would be needed in the update function is an API to get a list of cached queries based on their name. For example, if I display two lists with different params but the same underlying listFoos query, I'd like to be able to get those cached queries and their variables in the update function.
That way I'd be able to update their results eagerly and return.

This is also necessary for updating after a server-side render. The query is correctly cached, but you have almost no way to tell what its arguments were on the client side.

That's a bit like how refetchQueries works when you pass an array of strings, but refetchQueries implies a call to the server, while update doesn't and is the way to go for optimistic UI.

See: VulcanJS/Vulcan#2381. We have had a lot of trouble implementing a robust optimistic UI in Vulcan due to the very dynamic nature of our queries.

Edit: this is how I imagine the API, e.g. for a deletion mutation affecting a customer. Say the app has also run a few "customers" queries to get customers, but with different variables.
Basically, a findQueries method in apollo-cache-inmemory would do the trick. I am trying to read the code but I can't find such a method; I don't know whether it's realistic, but something like this would actually allow us to handle optimistic updating of queries with complex dynamic variables.
Also, sorry if this is already a feature request; I've browsed dozens of related issues, so I'm starting to get a bit lost.

update: (cache, { data: removedDoc }) => {
    const queries = cache.findQueries('customers');
    queries.forEach(({query, variables}) => {
         if (variablesAreRelevant(variables, removedDoc)){
              const data = cache.readQuery({query, variables});
              const newData = removeFromData(data, removedDoc)
              cache.writeQuery({ query, variables, data: newData})
         }
    })
}

Related: apollographql/react-apollo#708 (comment)

Second edit: I've successfully tested the implementation proposed by @ngryman in the link above, with apollo-client 2.6.3 and react-apollo 3, for a delete mutation. apollo-link-watched-mutation did not work as expected on the same use case. I did not test apollo-cache-updater, but the logic is similar; it's an interesting package.

This indeed solves the case where we have the same query multiple times but with dynamic variables (I insist on dynamic variables; a query with no variables is an entirely different problem).
It would be great to see a similar feature in the official apollo-client or cache.
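For reference, a rough sketch of how such a findQueries-style lookup could be approximated today. This is not the implementation from the linked comment, and it relies on the internal ROOT_QUERY key format of apollo-cache-inmemory, which is not a public API and may change between versions; the "customers" field name is just an example.

// Enumerate the cached variable combinations for a root field by inspecting
// the normalized store. Root fields with arguments are stored under keys like
// `customers({"active":true})`, fields without arguments under `customers`.
const findCachedArgs = (cache, fieldName) => {
  const root = cache.extract().ROOT_QUERY || {};
  return Object.keys(root)
    .filter(key => key === fieldName || key.startsWith(`${fieldName}(`))
    .map(key =>
      key === fieldName ? {} : JSON.parse(key.slice(fieldName.length + 1, -1))
    );
};

// e.g. findCachedArgs(cache, 'customers') might return [{}, { active: true }],
// which could then drive one cache.readQuery / cache.writeQuery pass per
// variables set, as in the update function imagined above.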

The github-actions bot locked this issue as resolved and limited conversation to collaborators on Feb 16, 2023.