Optimize HNSW Memory Allocation with Chunked Arrays #633
base: develop
Conversation
…tempt to fix Python bindings.
…ng in Python bindings.
LGTM overall. Thanks for upstreaming this change, Michael!
LGTM! If not already done, it would be useful to run a sanity test with an ASAN build.
@yairgott: ASAN tests are already being run in GitHub Actions.
The original patch was created by @PingXie (reviewed by @yairgott). These changes originated in the valkey-search project.
Original commit message below:
This change improves the memory allocation strategy of the HNSW index data structures by adopting chunked arrays. Previously, resizing the HNSW index involved expensive realloc calls, which were particularly costly in two ways:
Memory Consumption: During resizing, realloc may temporarily require double the memory, because the old block must remain allocated until the new block is ready and its contents have been copied over. This posed a substantial issue when dealing with gigabytes of data, leading to excessive peak memory usage.
Performance Overhead: The realloc operation entails copying data from the old block to the new block, invoking an expensive memcpy operation. This process becomes increasingly burdensome as the data size grows, resulting in significant performance degradation for large-scale indices.
By transitioning to chunked arrays for memory allocation, we circumvent the need for realloc calls during resizing. This approach not only reduces the memory footprint by eliminating the temporary doubling of memory but also enhances overall performance by avoiding the costly memcpy operations. The chunked array strategy allocates new memory in smaller, manageable chunks as the data grows, ensuring more efficient use of memory and improving the scalability of HNSW index operations, especially critical for indices spanning multiple gigabytes.