feat(single-node): add memory allocation #19895


Merged (2 commits) on Jan 2, 2025
28 changes: 28 additions & 0 deletions src/cmd_all/src/single_node.rs
@@ -15,7 +15,9 @@
use clap::Parser;
use home::home_dir;
use risingwave_common::config::MetaBackend;
use risingwave_common::util::resource_util::memory::system_memory_available_bytes;
use risingwave_compactor::CompactorOpts;
use risingwave_compute::memory::config::gradient_reserve_memory_bytes;
use risingwave_compute::ComputeNodeOpts;
use risingwave_frontend::FrontendOpts;
use risingwave_meta_node::MetaNodeOpts;
@@ -207,6 +209,16 @@ pub fn map_single_node_opts_to_standalone_opts(opts: SingleNodeOpts) -> ParsedSt
frontend_opts.meta_addr = meta_addr.parse().unwrap();
compactor_opts.meta_address = meta_addr.parse().unwrap();

// Allocate memory for each node
let system_total_mem = system_memory_available_bytes();
frontend_opts.frontend_total_memory_bytes = memory_for_frontend(system_total_mem);
compactor_opts.compactor_total_memory_bytes = memory_for_compactor(system_total_mem);
compute_opts.total_memory_bytes = system_total_mem
- memory_for_frontend(system_total_mem)
- memory_for_compactor(system_total_mem);
@hzxa21 (Collaborator) commented on Dec 23, 2024:

I had a discussion with @Li0k recently on the memory allocation between CN and compactor in standalone.
Let's say CN_MEM = x * Total_MEM, Compactor_MEM = y * Total_MEM, FE_MEM = z * Total_MEM. The choices are:

  1. x + y + z = 1.

    • Pros: more likely to avoid OOM.
    • Cons: less efficient on memory usage, because when the loads on compaction/FE are not large, the memory is wasted and cannot be used by CN.
  2. x + y + z > 1.

    • Pros: more efficient on memory usage, because the CN operator cache can use the idle memory when compactor/FE are not fully loaded.
    • Cons: more vulnerable to OOM, because CN operator cache eviction is lazy.

I'm leaning more towards 2 because it fits the CN dynamic operator cache design better. For example, we can have x = y = z = 0.8 together with the gradient allocation. WDYT?
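The trade-off between the two options can be made concrete with some quick arithmetic. This is a sketch on a hypothetical 64 GiB host; the fractions and the `budget` helper are illustrative, not the values chosen by this PR:

```rust
const GIB: usize = 1 << 30;

// Hypothetical per-node budget: a fixed fraction of total memory.
fn budget(total: usize, fraction: f64) -> usize {
    (total as f64 * fraction) as usize
}

fn main() {
    let total = 64 * GIB;
    // Option 1: x + y + z = 1, e.g. 0.8 + 0.1 + 0.1.
    // The budgets never exceed physical memory.
    let opt1 = budget(total, 0.8) + 2 * budget(total, 0.1);
    assert!(opt1 <= total);
    // Option 2: x = y = z = 0.8.
    // The budgets sum to 2.4x physical memory, betting that
    // compactor/FE rarely hit their full share at the same time.
    let opt2 = 3 * budget(total, 0.8);
    assert!(opt2 > 2 * total);
}
```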

@fuyufjh (Member, Author) commented on Dec 23, 2024:

CN operator cache is special - keep in mind that it's controlled according to the process-level jemalloc statistics. (See `memory_manager_target_bytes` in the next line.)

Here, the 3 options

  • frontend_opts.frontend_total_memory_bytes
  • compactor_opts.compactor_total_memory_bytes
  • compute_opts.total_memory_bytes

mostly decide the storage memory, including the upload buffer and the meta & block caches:

  • frontend_opts.frontend_total_memory_bytes (code here)
    • batch_memory_limit
  • compactor_opts.compactor_total_memory_bytes (code here)
    • meta_cache_capacity_bytes
    • compactor_memory_limit_bytes
  • compute_opts.total_memory_bytes (code here)
    • block_cache_capacity_mb
    • meta_cache_capacity_mb
    • shared_buffer_capacity_mb
    • (embedded) compactor_memory_limit_mb

As you are more familiar with storage, I'll follow your decisions.

Besides, it doesn't count the Meta node's memory usage (because there is no such option), so actually it's already > 1.

A Contributor commented:

According to the example above, with

x = y = z = 0.8

the total is well over 1.0 (0.8 * 3 = 2.4).
My concern is that with such an "excessive" configuration, the CN's memory will be evicted faster (according to jemalloc statistics), which will severely weaken the CN's capabilities.

A Contributor commented:

I'm also in favor of option 2, but it seems the total should not exceed the limit by too much. For example, only allow it to exceed by 0.5 or less, in order to guarantee the capacity of the CN.

  • The memory footprint of the compactor is not a long-term "cache", so it is freed once used, and it can be "over-committed" more.
  • The FE footprint seems to come from batch queries? I'm not sure it's released in time. cc @fuyufjh @hzxa21

compute_opts.memory_manager_target_bytes =
Some(gradient_reserve_memory_bytes(system_total_mem));

// Apply node-specific options
if let Some(total_memory_bytes) = opts.node_opts.total_memory_bytes {
compute_opts.total_memory_bytes = total_memory_bytes;
@@ -234,3 +246,19 @@ pub fn map_single_node_opts_to_standalone_opts(opts: SingleNodeOpts) -> ParsedSt
compactor_opts: Some(compactor_opts),
}
}

fn memory_for_frontend(total_memory_bytes: usize) -> usize {
if total_memory_bytes <= (16 << 30) {
total_memory_bytes / 8
} else {
(total_memory_bytes - (16 << 30)) / 16 + (16 << 30) / 8
}
}

fn memory_for_compactor(total_memory_bytes: usize) -> usize {
if total_memory_bytes <= (16 << 30) {
total_memory_bytes / 8
} else {
(total_memory_bytes - (16 << 30)) / 16 + (16 << 30) / 8
}
}
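Plugging numbers into the two helpers above (both share the same curve: 1/8 of the first 16 GiB, plus 1/16 of everything beyond) gives a quick sanity check. `memory_for_aux_node` is a made-up name for this sketch:

```rust
const GIB: usize = 1 << 30;

// Same curve as `memory_for_frontend` / `memory_for_compactor` above:
// 1/8 of the first 16 GiB, plus 1/16 of the remainder.
fn memory_for_aux_node(total_memory_bytes: usize) -> usize {
    if total_memory_bytes <= 16 * GIB {
        total_memory_bytes / 8
    } else {
        (total_memory_bytes - 16 * GIB) / 16 + (16 * GIB) / 8
    }
}

fn main() {
    // 8 GiB host: each of FE/compactor gets 8/8 = 1 GiB.
    assert_eq!(memory_for_aux_node(8 * GIB), GIB);
    // 64 GiB host: (64 - 16)/16 + 16/8 = 3 + 2 = 5 GiB each.
    assert_eq!(memory_for_aux_node(64 * GIB), 5 * GIB);
}
```

So on large hosts the frontend and compactor shares grow sub-linearly, leaving the bulk of memory for the compute node.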
10 changes: 9 additions & 1 deletion src/compute/src/lib.rs
@@ -87,7 +87,8 @@ pub struct ComputeNodeOpts {
pub total_memory_bytes: usize,

/// Reserved memory for the compute node in bytes.
/// If not set, a portion (default to 30%) for the `total_memory_bytes` will be used as the reserved memory.
/// If not set, a portion (defaults to 30% for the first 16GB and 20% for the rest)
/// of the `total_memory_bytes` will be used as the reserved memory.
///
/// The total memory compute and storage can use is `total_memory_bytes` - `reserved_memory_bytes`.
#[clap(long, env = "RW_RESERVED_MEMORY_BYTES")]
@@ -97,6 +98,13 @@ pub struct ComputeNodeOpts {
/// If not set, the default value is `total_memory_bytes` - `reserved_memory_bytes`
///
/// It's strongly recommended to set it for standalone deployment.
///
/// ## Why is this needed?
///
/// Our [`crate::memory::manager::MemoryManager`] works by reading the memory statistics from
/// Jemalloc. This is fine when running the compute node alone, but in standalone mode
/// the memory usage of **all nodes** is counted. Thus, we need to pass a reasonable total
/// usage so that memory is kept around this value.
#[clap(long, env = "RW_MEMORY_MANAGER_TARGET_BYTES")]
pub memory_manager_target_bytes: Option<usize>,

2 changes: 1 addition & 1 deletion src/compute/src/memory/config.rs
@@ -80,7 +80,7 @@ pub fn reserve_memory_bytes(opts: &ComputeNodeOpts) -> (usize, usize) {
/// The reserved memory size is calculated based on the following gradient:
/// - 30% of the first 16GB
/// - 20% of the rest
fn gradient_reserve_memory_bytes(total_memory_bytes: usize) -> usize {
pub fn gradient_reserve_memory_bytes(total_memory_bytes: usize) -> usize {
let mut total_memory_bytes = total_memory_bytes;
let mut reserved = 0;
for i in 0..RESERVED_MEMORY_LEVELS.len() {
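The hunk above is truncated before the loop body. A self-contained sketch of the gradient it describes (30% of the first 16 GiB, 20% of the rest); the `RESERVED_MEMORY_LEVELS` values here are reconstructed from the doc comment and may differ from the real constant:

```rust
const GIB: usize = 1 << 30;

// Assumed levels: (upper bound of the level, fraction reserved in that level).
const RESERVED_MEMORY_LEVELS: [(usize, f64); 2] = [(16 * GIB, 0.3), (usize::MAX, 0.2)];

pub fn gradient_reserve_memory_bytes(total_memory_bytes: usize) -> usize {
    let mut remaining = total_memory_bytes;
    let mut reserved = 0usize;
    let mut level_start = 0usize;
    for (level_end, fraction) in RESERVED_MEMORY_LEVELS {
        // Bytes of the total that fall into this level of the gradient.
        let in_level = remaining.min(level_end - level_start);
        reserved += (in_level as f64 * fraction) as usize;
        remaining -= in_level;
        if remaining == 0 {
            break;
        }
        level_start = level_end;
    }
    reserved
}

fn main() {
    // 64 GiB total: 0.3 * 16 GiB + 0.2 * 48 GiB = 4.8 + 9.6, i.e. about 14.4 GiB reserved.
    println!("{}", gradient_reserve_memory_bytes(64 * GIB) / GIB);
}
```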