[BUG] useQuery race-condition bug #10587

Closed · hakonxxx opened this issue Feb 21, 2023 · 12 comments
Labels: 🏓 awaiting-team-response (requires input from the apollo team) · 🐞 bug · ✔ confirmed · 🔍 investigate (Investigate further)

Comments

@hakonxxx

useQuery race-condition bug: the cache is not updated when one of multiple queries that refer to the same cache ID completes.

Root Cause

This may be the root cause of #10359.

useQuery-race-condition-bug-when-multi-queries-refer-same-cacheid

Reproduction

Suggested setup

import { gql } from "@apollo/client"

const typePolicies = {
  SchemaTest: { merge: true }
}
type SchemaTest = {
  a: any
  b: any
  c: any
  d: any
}
const doc1 = gql`query get1 {
  schemaTest {
    a
    b
  }
}`
const doc2 = gql`query get1 {
  schemaTest {
    a
    c
  }
}`
const doc3 = gql`query get1 {
  schemaTest {
    b
    d
  }
}`

Usage

It is easier to reproduce if the cache is filled with partial mock schemaTest data when the ApolloClient is initialized.

import { useQuery } from "@apollo/client"

// assume the default fetchPolicy is 'cache-and-network'
const useReproduction = () => {
  useQuery(doc1)
  useQuery(doc2)
  useQuery(doc3)
}
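
For completeness, a minimal client setup matching the sketch above could look like the following. This is only a sketch: the HttpLink URI is a placeholder, and the defaultOptions block just encodes the cache-and-network assumption from the comment in the usage snippet.

import { ApolloClient, HttpLink, InMemoryCache } from "@apollo/client";

const client = new ApolloClient({
  // placeholder endpoint; the issue does not specify a transport
  link: new HttpLink({ uri: "https://example.com/graphql" }),
  cache: new InMemoryCache({
    typePolicies: {
      // merge incoming SchemaTest objects into the existing cache entry
      SchemaTest: { merge: true },
    },
  }),
  defaultOptions: {
    // every watched query uses cache-and-network, as assumed above
    watchQuery: { fetchPolicy: "cache-and-network" },
  },
});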

Expected Behavior

After all requests are done, the cache should be the merge of the results of doc1, doc2, and doc3.

Actual Behavior

After all requests are done, the cache can end up as any combination of the doc1/doc2/doc3 results.

Versions

@apollo/client: 3.6.9

@jerelmiller
Member

Hey @hakonxxx 👋

I tried to reproduce this as best I could, using the same @apollo/client version, and was unable to do so. Check out this codesandbox for my attempt.

I made an effort to:

  • Use the same type names
  • Use the same type policy
  • Name the GraphQL queries using the same operation name for the 3 queries
  • Use 3 useQuery hooks in the same component.
  • Try and simulate the race condition by randomizing the time it takes to execute each request

As you can see, the cache appears to merge all the fields correctly, even as requests complete over time.

Would you be willing to try and provide a runnable example that demonstrates failing behavior? Given the information I have here, I'm not sure I have enough to go on to figure out what might be happening.
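
For reference, one way to randomize request timing is to put a delay link in front of the terminating link. The following is only a sketch of the general technique (the link name and the 0-1000 ms range are my own choices), not the sandbox's actual code:

import { ApolloLink, Observable } from "@apollo/client";

// Forward each operation after a random delay so responses come back
// out of order, which is what provokes the race.
const randomDelayLink = new ApolloLink(
  (operation, forward) =>
    new Observable((observer) => {
      const handle = setTimeout(() => {
        forward(operation).subscribe(observer);
      }, Math.random() * 1000);
      return () => clearTimeout(handle);
    })
);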

@hakonxxx
Author

Hi @jerelmiller, thanks for the reply!

I have forked a new sandbox with these changes:

  1. restore the new InMemoryCache with mock data that differs from what the resolver returns: { ROOT_QUERY: { __typename: "Query", schemaType: { __typename: "SchemaType", a: "a", b: "", c: "", d: "" } } }
  2. the documents have overlapping fields, for example Record<'a' | 'b', string> and Record<'a' | 'b' | 'c', string>

Refresh the page several times and you will see situations like this:
[screenshot: the resulting cache, missing some of the expected fields]
The cache is expected to be { __typename: "SchemaType", a: "a", b: "b", c: "c", d: "d" }, right?

BTW:

  1. each markResult triggers broadcastWatches, which calls queryInfo.setDiff
  2. but queryInfo.notify (which ultimately calls reobserveCacheFirst) is only triggered when the old diff differs from the new diff,
     so I mocked different cache data when initializing the InMemoryCache instance
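
A minimal sketch of what step 1 looks like in code, assuming the cache is created up front and handed to the ApolloClient; the SchemaType type policy is my assumption, and the restored data is the mock data quoted above:

import { InMemoryCache } from "@apollo/client";

// Seed the cache with partial data that differs from the resolver's response,
// so the first incoming network result produces a diff and triggers notify.
const cache = new InMemoryCache({
  typePolicies: { SchemaType: { merge: true } },
});

cache.restore({
  ROOT_QUERY: {
    __typename: "Query",
    schemaType: { __typename: "SchemaType", a: "a", b: "", c: "", d: "" },
  },
});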

@hakonxxx
Author

I have my doubts about using multiple queries with overlapping fields, but sometimes we just do it anyway :(

So back to this case: maybe there are better solutions than simply discarding the possibly-stale response (which is wasteful and may confuse users and frontend developers). Maybe we could partially update the cache instead?

Also, there is a bug even in the discarding itself (sometimes the response is discarded, sometimes not), because of setTimeout.

@phryneas
Member

Adding a Replay of the problem happening here.

@phryneas phryneas self-assigned this Feb 22, 2023
@dylanwulf
Contributor

@hakonxxx not sure if this is helpful, but your reproduction bug seems to get fixed if you change nextFetchPolicy from cache-and-network to cache-first.
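
A sketch of that workaround, reusing doc1/doc2/doc3 from the reproduction above (the exact option placement in the sandbox may differ):

import { useQuery } from "@apollo/client";

// Keep the initial cache-and-network fetch, but use cache-first for the
// follow-up cache-driven refetches so in-flight responses are not discarded.
const useWorkaround = () => {
  useQuery(doc1, { fetchPolicy: "cache-and-network", nextFetchPolicy: "cache-first" });
  useQuery(doc2, { fetchPolicy: "cache-and-network", nextFetchPolicy: "cache-first" });
  useQuery(doc3, { fetchPolicy: "cache-and-network", nextFetchPolicy: "cache-first" });
};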

@hakonxxx
Author

@hakonxxx not sure if this is helpful, but your reproduction bug seems to get fixed if you change nextFetchPolicy from cache-and-network to cache-first.

Yes:

  1. queryManager.fetchQueryObservable calls queryInfo.observableQuery.applyNextFetchPolicy synchronously after queryManager.fetchQueryByPolicy, so observableQuery.fetchPolicy changes to cache-first
  2. when the cache broadcasts watches, queryInfo.notify does not trigger the listeners, because queryInfo.shouldNotify checks the current fetchPolicy
  3. which means reobserveCacheFirst is not triggered and the in-flight response is not discarded

So the behavior gets even weirder...

@phryneas
Member

this.concast.removeObserver(this.observer);

This here is the culprit.

We have a chain of
[incoming result]
-> QueryInfo.markResult
-> ...
-> QueryInfo.setDiff
-> [ for other impacted queries ] notify
-> queryInfo.oqListener
-> ObservableQuery.reobserveCacheFirst
-> ObservableQuery.reobserve
-> removing the previous observable, which is still waiting for the network result
-> killing the concast for that
-> so asyncMap never reaches the getResultsFromLink mapping function


That said, having identified the reason and knowing how to fix it are two different things. I'm gonna dig a bit deeper.
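
To make the failure mode above concrete, here is a small self-contained model of the race. It deliberately does not use Apollo internals; the class and function names are invented for illustration. Each "query" writes its result to a shared cache after a random delay, and every write tears down the other in-flight requests, much like the reobserve chain tears down the previous concast.

type Cache = Record<string, string>;

class InFlightQuery {
  private cancelled = false;

  constructor(
    private readonly fields: Cache,
    private readonly delayMs: number,
    private readonly cache: Cache,
    private readonly onWrite: (source: InFlightQuery) => void
  ) {}

  start(): void {
    setTimeout(() => {
      if (this.cancelled) return; // the "removed observer": the result is silently dropped
      Object.assign(this.cache, this.fields); // apply the network result to the cache
      this.onWrite(this); // broadcast the cache change to the other watchers
    }, this.delayMs);
  }

  // Models the reobserve step tearing down the previous subscription:
  // the still-pending network result will never be applied.
  cancel(): void {
    this.cancelled = true;
  }
}

const cache: Cache = {};
const queries: InFlightQuery[] = [];

const broadcast = (source: InFlightQuery): void => {
  // Every other in-flight query reacts to the cache change by "re-observing",
  // which cancels its pending network subscription.
  for (const q of queries) {
    if (q !== source) q.cancel();
  }
};

queries.push(
  new InFlightQuery({ a: "a", b: "b" }, Math.random() * 100, cache, broadcast),
  new InFlightQuery({ a: "a", c: "c" }, Math.random() * 100, cache, broadcast),
  new InFlightQuery({ b: "b", d: "d" }, Math.random() * 100, cache, broadcast)
);
queries.forEach((q) => q.start());

// The expected merge would be { a, b, c, d }; with the cancellation in place,
// only the fields of whichever request lands first survive.
setTimeout(() => console.log(cache), 500);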

@hakonxxx
Author

I think two things need to be considered:

  1. maybe the contributors could discuss what behavior is expected here
  2. setTimeout makes the behavior random

@phryneas
Member

Oh, we'll definitely be discussing this internally to see what's going on; this is an excellent reproduction of a race condition that (as far as I'm concerned) should not be happening.

Thank you so much for this reproduction!

@hakonxxx
Author

hakonxxx commented Apr 7, 2023

It seems @apollo/client@3.7.11 fixes this bug (codesandbox), much appreciated! cc @phryneas

@bignimbus
Contributor

Thanks @hakonxxx 🙏🏻 closing this as resolved 🚀

@github-actions
Contributor

github-actions bot commented May 8, 2023

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
For general questions, we recommend using StackOverflow or our discord server.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 8, 2023