Response time for NextJS version seems too high #1118
Comments
https://asyncapi-studio-studio-next.vercel.app is built with […]. Please check if it has the same behavior. This build includes […]
Yes, in your case @aeworxet the time taken is much less, even for the first hit. The […]
@aeworxet's instance is deployed on Vercel; when I try adding the same optimization he mentioned on Netlify, the response time reduces significantly for me as well. Thanks @aeworxet. My instance can be found here: https://studio-helios2002.netlify.app
That doesn't seem to happen. Those are the times when requesting your site:
The difference in response time between your app and @aeworxet's is that the second one is always cached. I guess we should do some local testing first rather than relying on the CDN.
Are those the times to load the entire page? By less time I meant to fetch the initial HTML that is server-side rendered and contains the meta tags.
Because these are the times I see on my side:
What I shared is the time of a […]
Aren't you hitting the cache?
These are the times I got for a fresh document itself, so I don't think I am hitting the cache; also, the time for the first response is much higher than for the next 4.
What do you mean by first response? The first response after a deployment?
Nope, I mean that I send 5 requests to the site in a row, and the time it takes to obtain the meta tags is much higher for the first one.
Ok, meta tags. Interesting. I'm just doing a curl; not sure what the client you use is doing. Anyway, the issue is still there. For the record, I'm sharing two responses and their headers. As you can see there is no difference, not even in the cache headers. The first request was made early this morning, the second right after the first:
1st
2nd
Here is the diff:
2c2
< headers: {"age":["35139"],
---
> headers: {"age":["35548"],
6c6
< "date":["Fri, 14 Jun 2024 05:09:43 GMT"],
---
> "date":["Fri, 14 Jun 2024 05:16:34 GMT"],
13,14c13,14
< "x-nextjs-date":["Fri, 14 Jun 2024 05:09:43 GMT"],
< "x-nf-request-id":["01J0AJDH1HTFWZK1YF8AAYZKSB"],
---
> "x-nextjs-date":["Fri, 14 Jun 2024 05:16:34 GMT"],
> "x-nf-request-id":["01J0AJT4HJ7V04BBH4Z48QQP9J"],
19,22c19,22
< time_lookup: 0.015968
< time_connect: 0.045507
< time_appconnect: 0.076425
< time_pretransfer: 0.076563
---
> time_lookup: 0.014502
> time_connect: 0.041648
> time_appconnect: 0.071193
> time_pretransfer: 0.071333
24c24
< time_starttransfer: 3.505065
---
> time_starttransfer: 0.530230
26c26
< time_total: 3.532842
---
> time_total: 0.558090
Even though in my case the entire document does need to be fetched as well, this is the script I used, for anyone wanting to try it out: https://gist.github.com/helios2003/2fdb65377a8b1580b91464cbc7a1d974
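For readers who don't want to open the gist, here is a rough sketch of what such a measurement could look like (illustrative only, not the gist itself): it fires a few sequential requests from Node 18+ and prints how long each full response takes. The target URL is an assumption.

```ts
// Hypothetical re-creation of the measurement script, not the actual gist.
const TARGET = 'https://studio-helios2002.netlify.app/'; // assumed target URL

async function measure(runs = 5): Promise<void> {
  for (let i = 1; i <= runs; i++) {
    const start = performance.now();
    const res = await fetch(TARGET);
    await res.text(); // wait until the whole document has been received
    const ms = performance.now() - start;
    console.log(`request ${i}: HTTP ${res.status} in ${ms.toFixed(0)} ms`);
  }
}

measure().catch(console.error);
```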
The theory that is gradually becoming more solid in my head is that we are always doing SSR. And that makes sense, because afaik NextJS is set up by default to SSR (even for static pages) and then rely on the cache. A post somewhat backing up my theory: https://answers.netlify.com/t/slow-initial-load-time-on-ssg-with-nextjs/46384/3
In order to validate this theory, I believe measuring the time from when the request hits the NextJS router until it serves the response should tell us the time spent processing the request. The rest would be the time spent spinning up the serverless function.
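One crude way to get at that split (a sketch assuming the standard middleware.ts convention; this is not something currently in Studio) is to stamp the moment the request reaches the NextJS router in a response header and compare it against the Date header and the client-observed total time:

```ts
// middleware.ts — illustrative only
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(_request: NextRequest) {
  const response = NextResponse.next();
  // Time before this stamp is roughly network + function spin-up;
  // time between this stamp and the response being sent is NextJS processing.
  response.headers.set('x-router-hit-at', new Date().toISOString());
  return response;
}

export const config = { matcher: '/' };
```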
Additionally, can we check if we are using the NextJS Runtime on Netlify? I have no permission to see the build logs at https://app.netlify.com/sites/studio-next 🤷. The build logs should show something like […]

EDIT: Can you confirm, @helios2003, that https://studio-helios2002.netlify.app has the Netlify NextJS runtime enabled? Then we can discard this as a possible solution.
@smoya Yes, the NextJS runtime is enabled in https://studio-helios2002.netlify.app/.
@smoya I did some testing and I think your theory is right. BTW, we need this, right? We need some part of the page to be rendered on the server so we can add the OpenGraph metadata?
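For context, in the App Router the OpenGraph metadata for a static route does not by itself force per-request rendering: it can be emitted at build time through the Metadata API. A sketch with illustrative values (the base64-dependent metadata discussed later in this issue is the part that would need generateMetadata and a dynamic render):

```ts
// app/page.tsx (or app/layout.tsx) — illustrative values, not Studio's actual metadata
import type { Metadata } from 'next';

export const metadata: Metadata = {
  title: 'AsyncAPI Studio',
  openGraph: {
    title: 'AsyncAPI Studio',
    description: 'Design and document event-driven APIs', // placeholder description
  },
};
```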
@KhudaDad414, can you tell why we aren't caching certain components at build time itself, following Static Site Generation (SSG)?
Hi @smoya, I'm new to the AsyncAPI community, but I disagree with some of your points here. Netlify Edge uses Deno Deploy, which relies on V8 isolates, and V8 isolates are known for fast starts even when a cold start happens; that's different from what we see with virtual machines. My assumption is that this is probably related to some CSS package being downloaded during the initial startup. I will try to set up the bundle analyzer and see what happens there. Reference: […]
The whole page is statically generated and cached currently, not just some components. We will have to make the page generation dynamic at some point, because we have to generate the OpenGraph metadata.

First scenario: no cache at the CDN level (Netlify Edge), so the server had to cold start and fetch the Next.js cache.
Second scenario: cache at the CDN level (Netlify Edge).
Third scenario: no cache at the CDN (Netlify Edge) level but cache at Next.js.

@jerensl I don't think the problem is downloading some CSS. As you can see in the above examples, the wait time increases in the […]

The problems:
Based on some tests that I have done on my fork, hosted here, we can resolve this issue by having custom cache options in the response headers:
# Netlify CDN should keep the cache for 100 days.
CDN-Cache-Control: public, max-age=3640000, must-revalidate
# Other Layers (including browser) shouldn't do any caching.
Cache-Control: public, max-age=0, must-revalidate
After we add those headers, the […]
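For reference, those headers could also be declared in the Next.js config rather than in Netlify-specific files; below is a sketch using the documented headers() option with the values quoted above (whether Netlify's runtime honors Cache-Control set this way is something to verify, and this is not the exact change from the fork):

```ts
// next.config.mjs — illustrative sketch
const nextConfig = {
  async headers() {
    return [
      {
        source: '/',
        headers: [
          // Netlify CDN keeps the cached copy for a long time.
          { key: 'CDN-Cache-Control', value: 'public, max-age=3640000, must-revalidate' },
          // Other layers (including the browser) revalidate on every request.
          { key: 'Cache-Control', value: 'public, max-age=0, must-revalidate' },
        ],
      },
    ];
  },
};

export default nextConfig;
```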
Yeah, you are right, there is something related to […]
But I think we can do something about the 4-second cold start. Let me explain: after checking with the bundle analyzer, I found […]

I also ran another test on this website, https://studio-helios2002.netlify.app/, and found there are long-running tasks on the main thread related to monaco, which look identical to a cold start in Node.js (see the red arrow).

Then I checked the code where monaco is declared using a web worker. Why are there no web worker tasks running here?

Conclusion: […]
After making some contributions to Modelina, I realized they did it so well with […]. One thing I noticed between Modelina and Studio is: […]
Let's check Theo's video here; he explains very well why the App Router will mostly get a cold start, and he offers a solution to that problem: https://www.youtube.com/watch?v=zsa9Ey9INEg&t=643s

Solution: […]
The main thread of the client or of the server? If you mean on the server, the page is static and is only built once (at build time). If you mean on the client, then why doesn't it always have that 4-second waiting time, and why is it on par with https://studio.asyncapi.com/, which is a normal CRA?
It does (at least the two workers that are supposed to run: the main worker and the yaml worker).
Can you point out what feature we need from […]?
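For readers following along, the worker wiring being discussed usually looks something like the following. This is the generic monaco-editor + monaco-yaml pattern from the upstream docs, not necessarily Studio's exact code:

```ts
// Tell monaco how to spawn its workers so heavy work runs off the main thread.
window.MonacoEnvironment = {
  getWorker(_moduleId, label) {
    if (label === 'yaml') {
      // YAML language features (used for AsyncAPI documents) run in their own worker.
      return new Worker(new URL('monaco-yaml/yaml.worker', import.meta.url));
    }
    // Everything else falls back to the generic editor worker.
    return new Worker(new URL('monaco-editor/esm/vs/editor/editor.worker', import.meta.url));
  },
};
```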
I think the concept you mention is more related to the Pages Router, which is the old way of using NextJS without React Server Components (RSC); what is implemented here uses the App Router, which is built on top of React Server Components by default. I also see you are using a Client Component, and I think there are misconceptions about what it is supposed to do: as far as I know, a Client Component renders on both the server and the client and applies a technique called hydration on the client side by injecting functionality. Also see here how Dan Abramov explains RSC in a simple way.

In my opinion, the NextJS App Router and Pages Router are two different kinds of framework. As discussed here, I don't think this should be treated as just a different architecture or a to-do list item (React Server Components); the App Router is more similar to Remix than to Create React App. I also couldn't find any decision record about why you came up with the idea to use React Server Components here: https://github.com/asyncapi/studio/blob/master/doc/adr/0007-use-nextjs.md

Based on how React Server Components work, it's not surprising that we got a 4-second cold start, because component rendering happens both on the client and on the server. Consider how huge the changes are when adopting React Server Components: they make us rethink how we are supposed to deal with the server and the client at the same time, and they also make some state managers rethink how they are supposed to deal with it.
If the […] gets fixed, that's good then. BTW, I ran the test on the website mentioned above, which is https://studio-helios2002.netlify.app/
Thanks for the explanation @jerensl.
By […]: this would be valid if we had a dynamically rendered page. Since the page is statically rendered and Full Route Cached, the server-side components won't render on each request; they are rendered at build time and served to the client as the React Server Component Payload. Are you suggesting that a cold start invalidates the cache and the page is rendered on the server again?
The '/' route is basically a server component that renders statically by default, but it works very differently from a client component, which needs server rendering.
But we still have components/StudioWrapper.tsx, right? Because of that, the '/' route, which was statically rendered before, becomes dynamically rendered.
No, but I'm suggesting we experiment with Partial Prerendering; keep in mind it is still an experimental feature. Basically, it serves the static render without waiting for the dynamic rendering. Reference: […]
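A minimal sketch of what that could look like (Partial Prerendering is experimental and, at the time of this discussion, only available on canary releases; the component names below are illustrative, not Studio's code):

```tsx
// next.config.mjs: export default { experimental: { ppr: true } };

// app/page.tsx — the static shell is prerendered, the dynamic part streams in later.
import { Suspense } from 'react';
import StudioClient from './StudioClient'; // hypothetical client-heavy part (Monaco etc.)

export default function Page() {
  return (
    <>
      <header>AsyncAPI Studio</header> {/* prerendered at build time */}
      <Suspense fallback={<p>Loading editor…</p>}>
        <StudioClient /> {/* streamed in without blocking the static shell */}
      </Suspense>
    </>
  );
}
```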
Yes it does, but only at build time. No server-side code runs for statically generated routes; it doesn't matter whether there is a "use client" directive or not. Can you give an example where a route that is prerendered as static renders on the server (other than at build time, of course)?
Sorry, I don't understand: how does a route with static rendering "become" dynamic? Can you explain it a bit more?
I think I got it wrong here, but sure, a statically generated route runs during build time to generate the HTML, except for client components during the initial load without lazy-loading SSR.
It can happen under strict rules, but not in our case; for example, if we use a cookie or turn off caching on the fetch API.
Full route cache (Statically generated route, if we can call it that) will only take effect when you are not opting out of it.
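(For concreteness, that opt-in/opt-out is controlled per route segment; here is a sketch of the documented knobs with illustrative values, not Studio's actual settings.)

```tsx
// app/page.tsx — route segment config
export const dynamic = 'force-static'; // or 'auto' (default) / 'force-dynamic'
export const revalidate = false;       // never revalidate: the route stays in the Full Route Cache

export default function Page() {
  return <main>static shell</main>;
}
```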
Well, since we are using Monaco and it can't be rendered on the server, plus the other two components (Navigation and Preview) depend on the state, those have to be generated on the client. As for the other two (the Sidebar and the toolbar at the top), I am not sure if we can render them on the server. It may be possible.
It means: do not try to load this on the server side, since it relies on browser-only APIs. Some questions that I completely don't know the answer to, and that we need to answer to decide how we are going to structure the application: […]

These are out of the scope of this issue and need to be discussed separately.
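A common way to express that in the App Router is next/dynamic with ssr: false from inside a Client Component (a sketch; the import path and loading UI are illustrative, only the StudioWrapper name is taken from the component mentioned above):

```tsx
'use client';

import dynamic from 'next/dynamic';

// Skip server rendering for the editor entirely: it needs window/web workers,
// so it is only ever loaded and rendered in the browser.
const StudioWrapper = dynamic(() => import('../components/StudioWrapper'), {
  ssr: false,
  loading: () => <p>Loading Studio…</p>,
});

export default function StudioClient() {
  return <StudioWrapper />;
}
```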
An RSC Client Component is already smart enough to separate what belongs on the client and what belongs on the server via the hydration mechanism. Before RSC, we needed to use […]
Just so you know, before server components existed, react-query had a solution for managing this complexity of state between server and client, which they describe as asynchronous state. The implementation of RSC in react-query seems a bit complex and reminds me of why we moved away from Redux in the first place, and they are still figuring out how they will handle it in the future: https://tanstack.com/query/latest/docs/framework/react/guides/advanced-ssr. They also wrote a blog post about the trade-offs around network waterfalls: https://tanstack.com/query/latest/docs/framework/react/guides/request-waterfalls. This network waterfall is also part of why Remix is considered better than NextJS + RSC: https://remix.run/blog/react-server-components#obsessed-with-ux

Also, let's talk about the network waterfall; it seems to have been a hot topic between NextJS + RSC and Remix. The way NextJS handled it was by rewriting the fetch standard on the server. This solution is supposed to fix deduplication of identical fetch requests and introduces it as the default caching behavior in NextJS, as we see now, but other frameworks like Remix insist that the web standard should not be rewritten and that developers should control their own caching behavior. This also led to controversy and made the React team remove fetch deduplication from RSC; let's see whether NextJS follows or not: https://www.youtube.com/watch?v=AKNH7mXciEM&t=920s. It's supposed to be a good answer, but I don't know yet either, because we are in an awkward spot right now as web developers.
This issue has been automatically marked as stale because it has not had recent activity 😴 It will be closed in 120 days if no further activity occurs. To unstale this issue, add a comment with a detailed explanation. There can be many reasons why some specific issue has no activity. The most probable cause is lack of time, not lack of interest. AsyncAPI Initiative is a Linux Foundation project not owned by a single for-profit company. It is a community-driven initiative ruled under open governance model. Let us figure out together how to push this issue forward. Connect with us through one of many communication channels we established here. Thank you for your patience ❤️
Description
I'm working with @helios2003 in #224. In particular, I'm its mentor through GSoC2024.
As part of that project, we evaluated the possibility of running the AsyncAPI Parser-JS to parse AsyncAPI documents loaded via the base64 query param. @helios2003 made a test measuring response time with and without the addition of the parsing (which would turn the page from static to SSR). Our surprise came when we realized the NextJS version hosted in #224 took ~4 seconds just to serve the HTML for the Studio page (without even loading any doc on it!), just plain /.

Today I decided to run another test, and confirmed the findings. However, a weird caching mechanism is happening.
Let me share with you 3 consecutive requests I made and the results:
1st: [Next.js; hit, Netlify Edge; fwd=miss]
2nd: [Netlify Edge; hit]
3rd: [Next.js; hit, Netlify Edge; fwd=miss]
My assumption and understanding is that, for some reason:
The point is that I don't expect the first call to take 4 seconds; it should behave just like the third request, since a request to the root page should always give the same (static) response. Besides that, I have no clue why the cache-status header in the 1st request says the content was served as cached from NextJS.

cc @Amzani @helios2003 @KhudaDad414
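For context on the parsing change evaluated here, a rough sketch of the flow under discussion (illustrative only, not the actual Studio implementation): decode the document from the base64 query param and run it through Parser-JS.

```ts
import { Parser } from '@asyncapi/parser';

const parser = new Parser();

// Decode the ?base64=... query param and parse it server-side.
export async function parseFromBase64(base64Doc: string) {
  const raw = Buffer.from(base64Doc, 'base64').toString('utf-8');
  const { document, diagnostics } = await parser.parse(raw);
  return { document, diagnostics };
}
```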