In our organization, we are planning to encourage the use of fragments among individual teams so they can independently manage their own data requirements without affecting the main host app or other teams.
However, we discovered that the number of fragments has a significant impact on the site speed, specifically time-to-interactive. Through performance profiling, we are pretty certain that it is caused by two factors:
1. The `identify` function used to compute cache IDs (our GraphQL types have non-`id` primary key fields).
2. The `InMemoryCache`'s `diffQueryAgainstStore` function.
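For the first factor, one documented way to handle non-`id` primary keys is to declare `keyFields` in the cache's type policies, which tells `InMemoryCache` which fields to normalize on. The sketch below is illustrative only: the `Product`/`sku` names are invented, and the helper is a simplification rather than Apollo's actual implementation. It shows the shape of the cache ID that `keyFields` produces:

```javascript
// Configuring keyFields in Apollo Client (for reference, not executed here):
//
//   new InMemoryCache({
//     typePolicies: {
//       Product: { keyFields: ["sku"] },
//     },
//   });
//
// Simplified sketch of how a cache ID is derived from configured key fields.
function identify(obj, keyFieldsByType) {
  const keyFields = keyFieldsByType[obj.__typename];
  if (!keyFields) {
    // Default behavior: fall back to an `id` field when present.
    return obj.id != null ? `${obj.__typename}:${obj.id}` : undefined;
  }
  // Build an object from the declared key fields and serialize it.
  const keyObj = {};
  for (const field of keyFields) keyObj[field] = obj[field];
  return `${obj.__typename}:${JSON.stringify(keyObj)}`;
}

console.log(identify({ __typename: "Product", sku: "A1" }, { Product: ["sku"] }));
// -> Product:{"sku":"A1"}
```

Declaring the key fields up front keeps ID computation a cheap per-object lookup, which may reduce the `identify` cost the profiling points at.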
We created a sample app to help reproduce the issue. Please refer to the Link to Reproduction.
Thanks for reporting this @hanlindev! The reproduction and thorough analysis on your end really help. Thanks so much!
I'm hoping we can consider this for 3.10 as we'll be doing some fragment related work anyways. No guarantees, but I will certainly discuss it with the team. Thanks!
@hanlindev `useFragment` without data masking will definitely add overhead. The repro doesn't seem to use the `@nonreactive` directive, but I suspect the result will be similar even with it.
We saw similar problems in our project and introduced a custom `@mask` directive as a direct replacement for `@nonreactive`. It introduces proper masking and minimizes the overhead of `useFragment`. I've opened a PR with a reference implementation of the `@mask` directive; maybe you can try it with your repro?
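For context, `@nonreactive` (built into Apollo Client 3.8+) marks a fragment spread so that changes to the fragment's data do not rerender the parent query. Since the PR describes `@mask` as a drop-in replacement, the sketch below shows both spellings; the `Post`/`PostDetails` names are invented for illustration, and the `@mask` usage is an assumption based on that description:

```javascript
// Query string only; actually running it requires an Apollo Client setup.
const POST_QUERY = /* GraphQL */ `
  query Post($id: ID!) {
    post(id: $id) {
      id
      # Built-in directive: the parent query skips rerender tracking
      # for this fragment's fields.
      ...PostDetails @nonreactive
      # Hypothetical drop-in replacement from the referenced PR:
      # ...PostDetails @mask
    }
  }
  fragment PostDetails on Post {
    title
    body
  }
`;

console.log(POST_QUERY.includes("@nonreactive")); // true
```

Each child component then reads its own slice via `useFragment`, so only that component rerenders when the fragment's data changes.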
Issue Description
The question was originally posted on the community forum; this issue was created so the Apollo team can track it.
Link to Reproduction
https://github.com/jashmoreindeed/apollo-client-performance
Reproduction Steps
See the README for the apollo-client-performance repo.
@apollo/client version
3.8.10