execute future in tokio::spawn causes more memory consumption. #7064
Comments
Please try to measure the memory using this utility:

```rust
use core::sync::atomic::{AtomicUsize, Ordering::Relaxed};
use std::alloc::{GlobalAlloc, Layout, System};

/// Global allocator wrapper that tracks the total number of live heap bytes.
struct TrackedAlloc;

#[global_allocator]
static ALLOC: TrackedAlloc = TrackedAlloc;

static TOTAL_MEM: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for TrackedAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ret = System.alloc(layout);
        if !ret.is_null() {
            TOTAL_MEM.fetch_add(layout.size(), Relaxed);
        }
        ret
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        TOTAL_MEM.fetch_sub(layout.size(), Relaxed);
        System.dealloc(ptr, layout);
    }

    unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8 {
        let ret = System.alloc_zeroed(layout);
        if !ret.is_null() {
            TOTAL_MEM.fetch_add(layout.size(), Relaxed);
        }
        ret
    }

    unsafe fn realloc(&self, ptr: *mut u8, layout: Layout, new_size: usize) -> *mut u8 {
        let ret = System.realloc(ptr, layout, new_size);
        if !ret.is_null() {
            // Adding the wrapping difference is equivalent to adding or
            // subtracting the size change, thanks to modular arithmetic.
            TOTAL_MEM.fetch_add(new_size.wrapping_sub(layout.size()), Relaxed);
        }
        ret
    }
}
```
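For reference, a minimal sketch of how this counter might be read outside an HTTP handler (assuming the `TrackedAlloc` / `TOTAL_MEM` definitions above are compiled into the same crate; the helper below is illustrative, not from the thread):

```rust
// Hypothetical helper: prints the tracked live-byte count at interesting points.
// Assumes TrackedAlloc / TOTAL_MEM from the snippet above are in scope.
fn report(label: &str) {
    let bytes = TOTAL_MEM.load(Relaxed);
    println!("{label}: {bytes} bytes ({} KiB)", bytes / 1024);
}

fn main() {
    report("startup");
    let buf: Vec<u8> = Vec::with_capacity(1 << 20); // reserve 1 MiB on the heap
    report("after reserving 1 MiB");
    drop(buf);
    report("after drop");
}
```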
without tokio::spawn:
with tokio::spawn:
The results are identical, so why does tokio::spawn consume more memory in the system monitor? Is this because of memory fragmentation or the memory allocator's cache?
Memory allocators often hold on to memory you are not using so that future allocations are faster. That's most likely what is happening. Of course, fragmentation could also be a factor. Have you tried with jemalloc?
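A minimal sketch of that suggestion, assuming the `jemallocator` crate is added to Cargo.toml (this snippet is illustrative, not from the thread):

```rust
// Swap the global allocator to jemalloc via the jemallocator crate.
use jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // Every heap allocation below now goes through jemalloc instead of the
    // system allocator, so allocator caching behavior changes accordingly.
    let v: Vec<u32> = (0..1_000_000).collect();
    println!("allocated {} u32s through jemalloc", v.len());
}
```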
It's okay if a small amount of memory is cached by the allocator for future use, but 100 MB is not a small amount, I think. I will try the same thing with jemalloc.
With the following modification:

```rust
use std::{future::Future, time::Duration};
use axum::{http::{HeaderName, HeaderValue}, routing::get, Router};
use axum_extensions::request_counter::RequestCounter;
use routes::accounts::account_management_route;
use tower_http::cors::Any;

mod axum_extensions;
mod routes;

use core::sync::atomic::{AtomicUsize, Ordering::Relaxed};
use std::alloc::{GlobalAlloc, Layout, System};

static _JEMALLOC: jemallocator::Jemalloc = jemallocator::Jemalloc {};

struct TrackedAlloc;

#[global_allocator]
static ALLOC: TrackedAlloc = TrackedAlloc;

static TOTAL_MEM: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for TrackedAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ret = _JEMALLOC.alloc(layout);
        if !ret.is_null() {
            TOTAL_MEM.fetch_add(layout.size(), Relaxed);
        }
        ret
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        TOTAL_MEM.fetch_sub(layout.size(), Relaxed);
        _JEMALLOC.dealloc(ptr, layout);
    }

    unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8 {
        let ret = _JEMALLOC.alloc_zeroed(layout);
        if !ret.is_null() {
            TOTAL_MEM.fetch_add(layout.size(), Relaxed);
        }
        ret
    }

    unsafe fn realloc(&self, ptr: *mut u8, layout: Layout, new_size: usize) -> *mut u8 {
        let ret = _JEMALLOC.realloc(ptr, layout, new_size);
        if !ret.is_null() {
            TOTAL_MEM.fetch_add(new_size.wrapping_sub(layout.size()), Relaxed);
        }
        ret
    }
}

#[tokio::main]
async fn main() {
    let (host, port) = (
        std::env::var("SERVER_HOST").unwrap_or("0.0.0.0".to_string()),
        std::env::var("SERVER_PORT").unwrap_or("7999".to_string()).parse().unwrap_or(7999),
    );
    let app = Router::new()
        .route("/", get(|| async {
            "Welcome to our customer service."
        }))
        .route("/memory", get(|| async {
            let bytes_of_mem = TOTAL_MEM.load(Relaxed);
            format!("{} bytes = {} kilobytes = {} megabytes", bytes_of_mem, bytes_of_mem / 1024, bytes_of_mem / 1024 / 1024)
        }))
        .nest("/api/v1", Router::new()
            .nest("/account", account_management_route().await)
        )
        .layer(RequestCounter::new())
        .layer(tower_http::cors::CorsLayer::new().allow_headers(Any).allow_methods(Any).allow_origin(Any))
        .layer(tower_http::set_header::SetResponseHeaderLayer::appending(HeaderName::from_static("developer"), HeaderValue::from_static("Meshel DreamLab software technologies")))
        .layer(tower_http::set_header::SetResponseHeaderLayer::appending(HeaderName::from_static("server"), HeaderValue::from_static("Rust + Tokio + Hyper + Axum")));

    let task = async move {
        let tcp_server = tokio::net::TcpListener::bind((host, port)).await.unwrap();
        axum::serve(tcp_server, app).await.unwrap();
    };
    tokio::spawn(task).await.unwrap();
}
```

jemalloc with tokio::spawn:
and Plasma system monitor:
jemalloc without tokio::spawn:
Plasma system monitor:
Jemalloc does give cached memory back to the OS, but only after a delay, and it doesn't happen if the application is completely idle. You can try configuring jemalloc with […]
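As an illustration (not from the thread), the gap between memory the application has allocated and memory jemalloc is still holding can be observed with the `jemalloc_ctl` crate, assuming it is added alongside `jemallocator`:

```rust
// Illustrative sketch: read jemalloc's own statistics via jemalloc_ctl.
use jemalloc_ctl::{epoch, stats};

fn print_jemalloc_stats() {
    // jemalloc caches its statistics; advancing the epoch refreshes them.
    epoch::advance().unwrap();
    let allocated = stats::allocated::read().unwrap(); // bytes the program requested
    let resident = stats::resident::read().unwrap();   // bytes jemalloc holds from the OS
    println!("allocated = {allocated} B, resident = {resident} B");
}
```

The difference between `resident` and `allocated` is roughly the memory jemalloc is caching and has not yet returned to the OS.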
In my case this is totally acceptable, and it doesn't happen during normal execution.
You're welcome.
Version
Rust: rustc 1.83.0 (90b35a623 2024-11-26)

Platform
The output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)

Description
I'm working on a server-side project that uses Rust + tokio + tower + axum. I suddenly noticed that my hello-world axum HTTP server takes almost 100 MB of RAM when I run a load test with `ab -c 1000 -n 500000 http://0.0.0.0:7999/`. I finally found out why this simple hello-world program takes 100 MB of RAM: it happens when I run the server initialization inside tokio::spawn. It consumes about 10x more memory than running without tokio::spawn.
I tried this code:

main.rs, Cargo.toml: [code sample that causes the bug]

You can reproduce the problem by enabling `// run_with_spawn(task).await;` (removing the comment) and commenting out `run_without_spawn(task).await;`.

Here is the expected result (without using tokio::spawn):

Here is a screenshot after a 500,000-request load test with `ab -c 1000 -n 500000 http://0.0.0.0:7999/` when using tokio::spawn; the memory never goes back to normal (meaning around 10 MB):
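Since the attached main.rs is not reproduced here, the following is a minimal sketch of the two variants being compared; the function names come from the issue text, while the bodies and the toy `main` are assumptions rather than the reporter's actual code:

```rust
use std::future::Future;

// Low-memory variant: poll the future directly on the current task.
async fn run_without_spawn<F: Future<Output = ()>>(task: F) {
    task.await;
}

// Variant reported to use ~10x more memory: move the future onto a
// separate runtime task with tokio::spawn and wait for it to finish.
async fn run_with_spawn<F>(task: F)
where
    F: Future<Output = ()> + Send + 'static,
{
    tokio::spawn(task).await.unwrap();
}

#[tokio::main]
async fn main() {
    let task = async {
        // Server setup and axum::serve(...) would go here.
    };
    // Toggle between the two calls below to compare memory usage.
    run_without_spawn(task).await;
    // run_with_spawn(task).await;
}
```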