
indexserver: reduce defaults for compound shards #850

Merged: 1 commit merged into main from sh/reduce-defaults-for-compound-shards on Oct 24, 2024

Conversation

stefanhengl (Member) commented on Oct 24, 2024:

Relates to SPLF-615

zoekt-merge-index can OOM on indexer instances with <= 4 GB of memory, so we lower the defaults. Larger instances should set the ENVs to higher values.

Test plan:
N/A

stefanhengl requested a review from a team on Oct 24, 2024, 13:43
cla-bot added the cla-signed label on Oct 24, 2024
stefanhengl merged commit bfd8ee8 into main on Oct 24, 2024
9 checks passed
stefanhengl deleted the sh/reduce-defaults-for-compound-shards branch on Oct 24, 2024, 14:26
@@ -1264,8 +1264,8 @@ func (rc *rootConfig) registerRootFlags(fs *flag.FlagSet) {
 fs.BoolVar(&rc.disableShardMerging, "shard_merging", getEnvWithDefaultBool("SRC_DISABLE_SHARD_MERGING", false), "disable shard merging")
 fs.DurationVar(&rc.vacuumInterval, "vacuum_interval", getEnvWithDefaultDuration("SRC_VACUUM_INTERVAL", 24*time.Hour), "run vacuum this often")
 fs.DurationVar(&rc.mergeInterval, "merge_interval", getEnvWithDefaultDuration("SRC_MERGE_INTERVAL", 8*time.Hour), "run merge this often")
-fs.Int64Var(&rc.targetSize, "merge_target_size", getEnvWithDefaultInt64("SRC_MERGE_TARGET_SIZE", 2000), "the target size of compound shards in MiB")
-fs.Int64Var(&rc.minSize, "merge_min_size", getEnvWithDefaultInt64("SRC_MERGE_MIN_SIZE", 1800), "the minimum size of a compound shard in MiB")
+fs.Int64Var(&rc.targetSize, "merge_target_size", getEnvWithDefaultInt64("SRC_MERGE_TARGET_SIZE", 1000), "the target size of compound shards in MiB")
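
The flags above take their defaults from environment variables through helpers like getEnvWithDefaultInt64, whose bodies are not part of this diff. A minimal sketch of how such a helper typically looks, assuming the usual os.Getenv plus strconv pattern (the real zoekt helper may differ in naming and error handling):

package main

import (
	"log"
	"os"
	"strconv"
)

// Assumed shape of the helper: read an env var, fall back to the default
// when it is unset, and fail loudly on unparsable input.
func getEnvWithDefaultInt64(key string, defaultValue int64) int64 {
	v := os.Getenv(key)
	if v == "" {
		return defaultValue
	}
	i, err := strconv.ParseInt(v, 10, 64)
	if err != nil {
		log.Fatalf("invalid value %q for %s: %v", v, key, err)
	}
	return i
}

Larger instances can keep the previous behavior simply by exporting higher values, e.g. SRC_MERGE_TARGET_SIZE=2000 and SRC_MERGE_MIN_SIZE=1800 (the old defaults shown above) before starting the indexserver.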
Member commented:
@stefanhengl very late question here: why is the limit for compound shards still so much higher than our default shard size limit of 100MB?

stefanhengl (Member, Author) commented on Oct 31, 2024:

The purpose of compound shards is to amortize some of the constant costs we pay per shard (match tree construction, trigrams in memory). The bigger the compound shard, the better: for a big compound shard we already have most trigrams in memory, so adding another repo to it comes essentially for free. There is a graph in this blog post that shows the separation between N simple shards and 1 compound shard.

The size of the compound shard is only limited by memory and the supported max file size.

stefanhengl (Member, Author) commented:

Searching a simple shard is more performant, though, which is why we only put small, rarely updated repos into compound shards. We use size and update frequency as a very rough proxy for "less important".
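
Purely as an illustration of that heuristic, a candidate check could look like the sketch below; the type, function, and threshold values are invented for this example and are not zoekt's actual implementation:

package main

import "time"

// repoStats is a hypothetical summary of a repo; both the type and the
// thresholds below are invented for illustration, not taken from zoekt.
type repoStats struct {
	SizeMiB     int64
	LastIndexed time.Time
}

// isCompoundCandidate flags small, rarely updated repos as candidates for
// compound shards, mirroring the "size + update frequency" proxy above.
func isCompoundCandidate(r repoStats, now time.Time) bool {
	const maxSizeMiB = 100              // "small": illustrative cutoff
	const minIdle = 30 * 24 * time.Hour // "rarely updated": illustrative cutoff of ~30 days
	return r.SizeMiB <= maxSizeMiB && now.Sub(r.LastIndexed) >= minIdle
}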

Member commented:

I see, thanks for the background context! It would be nice if we had the same "max shard size" limit everywhere -- it's an easier mental model, so we know both index and merge should be bounded by roughly the same memory. But I'm guessing we tested this when we developed shard merging and found that the larger shards are beneficial.
