Hey,

Thank you all for the work on boost/beast; it is a great library to use. I am working on a project that needs an embedded HTTP server, and I switched from civetweb to Beast. One thing I am wondering about is the lowest latency actually achievable with Beast; my project will be making a large number of HTTP requests, so both throughput and latency matter to me.

I ran the HTTP awaitable server example from Boost 1.86 against wrk, and the measured latency ranges from 250us to 400us. Is this the lowest latency I can achieve with Boost.Beast, or am I missing something here?

Regards,
Hi, sorry for the late response. I somehow missed the notification for this issue.
There are a few strategies that can help lower latency:
Preload files into a container (e.g., std::string) during startup. Use http::string_body instead of file_body to avoid blocking system calls for file reads.
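A minimal sketch of what this could look like (the handler wiring, file path, and content type are assumptions, not taken from an existing example):

```cpp
#include <boost/beast/http.hpp>
#include <fstream>
#include <sstream>
#include <string>

namespace http = boost::beast::http;

// Read the whole file once at startup; the returned string can then be
// reused for every response without any file I/O on the hot path.
std::string preload_file(std::string const& path)
{
    std::ifstream in(path, std::ios::binary);
    std::ostringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

// Build a response from the preloaded body instead of using http::file_body.
template<class Body, class Allocator>
http::response<http::string_body>
make_cached_response(
    http::request<Body, http::basic_fields<Allocator>> const& req,
    std::string const& cached_body)
{
    http::response<http::string_body> res{http::status::ok, req.version()};
    res.set(http::field::content_type, "text/html");
    res.keep_alive(req.keep_alive());
    res.body() = cached_body;   // copy of the in-memory file, no blocking syscalls
    res.prepare_payload();      // sets Content-Length
    return res;
}
```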
If you're testing with 80 connections and 16 threads, requests will queue up (there aren't enough computing resources to serve them all concurrently). This means you're measuring queuing latency rather than pure processing latency.
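To isolate per-request processing latency from queuing, you could run wrk with a single thread and a single connection, for example (port and path are placeholders for your setup):

```sh
# one wrk thread, one connection, latency histogram enabled
wrk -t1 -c1 -d30s --latency http://127.0.0.1:8080/
```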
Prevent the OS scheduler from moving threads between cores. Here's an example for Linux:
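Since the original snippet is not shown above, here is a rough sketch of one way to do it with pthread_setaffinity_np; the io_context/thread layout is an assumption, not the exact code from the comment:

```cpp
#include <boost/asio/io_context.hpp>
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    unsigned const num_threads = std::thread::hardware_concurrency();
    boost::asio::io_context ioc{static_cast<int>(num_threads)};

    // ... set up the acceptor / session coroutines on `ioc` here ...

    std::vector<std::thread> threads;
    for(unsigned i = 0; i < num_threads; ++i)
    {
        threads.emplace_back([&ioc] { ioc.run(); });

        // Pin thread i to core i so the scheduler cannot migrate it.
        cpu_set_t cpuset;
        CPU_ZERO(&cpuset);
        CPU_SET(i, &cpuset);
        int const rc = pthread_setaffinity_np(
            threads.back().native_handle(), sizeof(cpu_set_t), &cpuset);
        if(rc != 0)
            std::fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
    }

    for(auto& t : threads)
        t.join();
}
```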