
feat(single-node): add memory allocation #19895

Merged
merged 2 commits into from
Jan 2, 2025

Conversation

fuyufjh
Member

@fuyufjh fuyufjh commented Dec 23, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Follow-up of #19477. This makes it work in single-node mode.

Checklist

  • I have written necessary rustdoc comments.
  • I have added necessary unit tests and integration tests.
  • I have added test labels as necessary.
  • I have added fuzzing tests or opened an issue to track them.
  • My PR contains breaking changes.
  • My PR changes performance-critical code, so I will run (micro) benchmarks and present the results.
  • My PR contains critical fixes that are necessary to be merged into the latest release.

Documentation

  • My PR needs documentation updates.
Release note

compactor_opts.compactor_total_memory_bytes = memory_for_compactor(system_total_mem);
compute_opts.total_memory_bytes = system_total_mem
- memory_for_frontend(system_total_mem)
- memory_for_compactor(system_total_mem);
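The quoted snippet subtracts the frontend and compactor budgets from the system total, so the three budgets sum to the physical memory. A minimal runnable sketch of that split (the 10% helper ratios here are hypothetical, not RisingWave's actual values):

```rust
// Sketch of the subtraction-based split in single-node mode: frontend and
// compactor each take a fixed share, and the compute node gets the remainder,
// so the three budgets sum exactly to the system total.
// The 10% ratios below are assumed for illustration only.
fn memory_for_frontend(total: usize) -> usize {
    total / 10 // assume 10% for the frontend
}

fn memory_for_compactor(total: usize) -> usize {
    total / 10 // assume 10% for the compactor
}

fn main() {
    let system_total_mem: usize = 16 * 1024 * 1024 * 1024; // 16 GiB
    let frontend = memory_for_frontend(system_total_mem);
    let compactor = memory_for_compactor(system_total_mem);
    // Compute node gets whatever is left.
    let compute = system_total_mem - frontend - compactor;
    assert_eq!(frontend + compactor + compute, system_total_mem);
    println!("frontend={frontend} compactor={compactor} compute={compute}");
}
```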
Collaborator

@hzxa21 hzxa21 Dec 23, 2024


I had a discussion with @Li0k recently about the memory allocation between the CN and the compactor in standalone mode.
Let's say CN_MEM = x * Total_MEM, Compactor_MEM = y * Total_MEM, FE_MEM = z * Total_MEM, the choices are

  1. x + y + z = 1.

    • Pros: more likely to avoid OOM.
    • Cons: less efficient memory usage, because when the load on compaction/FE is light, their reserved memory is wasted and cannot be used by the CN.
  2. x + y + z > 1

    • Pros: More efficient memory usage, because the CN operator cache can use the idle memory when the compactor/FE is not fully loaded.
    • Cons: More vulnerable to OOM because CN operator cache eviction is lazy.

I lean more towards 2 because it fits the CN dynamic operator cache design better. For example, we could set x = y = z = 0.8, or use a gradient allocation. WDYT?
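As a sketch of option 2: each component is granted a fraction of the physical total, and the fractions deliberately sum to > 1. The 0.8 ratios follow the example above; this is an illustration of the proposal, not the merged behavior:

```rust
// Over-committed split (option 2): the nominal budgets sum to more than
// physical memory, relying on the CN operator cache giving memory back
// (via lazy eviction) when the compactor/FE are under load.
fn budget(total_bytes: u64, fraction: f64) -> u64 {
    (total_bytes as f64 * fraction) as u64
}

fn main() {
    let total: u64 = 8 * 1024 * 1024 * 1024; // 8 GiB
    let (x, y, z) = (0.8, 0.8, 0.8); // CN, compactor, FE fractions
    let cn = budget(total, x);
    let compactor = budget(total, y);
    let fe = budget(total, z);
    // The nominal sum exceeds physical memory: 0.8 * 3 = 2.4x over-commit.
    assert!(cn + compactor + fe > total);
    println!("overcommit ratio = {:.1}", x + y + z);
}
```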

Member Author

@fuyufjh fuyufjh Dec 23, 2024


The CN operator cache is special: keep in mind that it's controlled according to the process-level jemalloc statistics. (See memory_manager_target_bytes on the next line.)

Here, the 3 options

  • frontend_opts.frontend_total_memory_bytes
  • compactor_opts.compactor_total_memory_bytes
  • compute_opts.total_memory_bytes

mostly decide the storage memory, including the upload buffer and the meta & block caches:

  • frontend_opts.frontend_total_memory_bytes (code here)
    • batch_memory_limit
  • compactor_opts.compactor_total_memory_bytes (code here)
    • meta_cache_capacity_bytes
    • compactor_memory_limit_bytes
  • compute_opts.total_memory_bytes decides (code here)
    • block_cache_capacity_mb
    • meta_cache_capacity_mb
    • shared_buffer_capacity_mb
    • (embedded) compactor_memory_limit_mb

As you are more familiar with storage, I'll follow your decision.

Besides, this doesn't count the meta node's memory usage (because there is no such option), so the sum is actually already > 1.
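A rough sketch (not RisingWave's actual implementation) of what "controlled according to the process-level jemalloc statistics" means: the memory manager compares the process-wide allocated bytes against a target and asks the operator cache to evict the excess:

```rust
// Hypothetical memory manager that drives operator-cache eviction from a
// process-level allocation statistic (as jemalloc would report it).
struct MemoryManager {
    target_bytes: u64, // e.g. derived from memory_manager_target_bytes
}

impl MemoryManager {
    /// How many bytes the operator cache should try to evict, given the
    /// current process-wide "allocated" statistic.
    fn eviction_goal(&self, jemalloc_allocated: u64) -> u64 {
        jemalloc_allocated.saturating_sub(self.target_bytes)
    }
}

fn main() {
    let mgr = MemoryManager { target_bytes: 10 << 30 }; // 10 GiB target
    // 2 GiB over target: evict 2 GiB.
    assert_eq!(mgr.eviction_goal(12 << 30), 2 << 30);
    // Under target: evict nothing.
    assert_eq!(mgr.eviction_goal(8 << 30), 0);
}
```

This is why over-committed budgets (option 2) make eviction more aggressive: the more the compactor/FE actually allocate, the higher the process-wide statistic, and the more the CN cache is asked to give back.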

Contributor


According to the example above (x = y = z = 0.8), the total is well over 1.0 (0.8 * 3 = 2.4).
My concern is that such "excessive" configurations will cause the CN's memory to be evicted faster (driven by the jemalloc statistics), which will severely weaken the CN's capabilities.

Contributor


I'm also in favor of option 2, but it seems the overshoot should be bounded. For example, only allow the sum to exceed 1 by 0.5 or less, in order to guarantee the CN's capacity.

  • The memory footprint of the compactor is not a long-term "cache": it is freed once the work is done, so it can tolerate more "over-allocation".
  • The FE footprint seems to come from batch queries? I'm not sure it's released in time. cc @fuyufjh @hzxa21

Contributor

@Li0k Li0k left a comment


LGTM

@fuyufjh fuyufjh added this pull request to the merge queue Jan 2, 2025
Merged via the queue into main with commit cb537c6 Jan 2, 2025
29 of 30 checks passed
@fuyufjh fuyufjh deleted the eric/single_node branch January 2, 2025 04:31