
[Draft] Parametrize 2 Js/Fs benchmarks#5490

Draft
mprimi wants to merge 1 commit into nats-io:main from mprimi:marco/parametrize_filestore_benchmarks

Conversation

@mprimi
Contributor

@mprimi mprimi commented Jun 5, 2024

DRAFT for early feedback

Parametrize 2 of the new JetStream FileStore benchmarks to collect more data by running multiple configurations that are similar to, but distinct from, the base case.

Signed-off-by: Marco Primi <marco@nats.io>

Example results:

 go test ./server/ --run '^NOOP$' --bench 'Benchmark_FileStore(SelectMsgBlock|LoadNextMsgSameFilterAsStream)' --count 3
goos: darwin
goarch: arm64
pkg: github.com/nats-io/nats-server/v2/server
Benchmark_FileStoreSelectMsgBlock/blockSize:128,msgSize:100,msgCount:1000-8             94922935                12.50 ns/op     8001.87 MB/s
Benchmark_FileStoreSelectMsgBlock/blockSize:128,msgSize:100,msgCount:1000-8             95003413                12.47 ns/op     8019.95 MB/s
Benchmark_FileStoreSelectMsgBlock/blockSize:128,msgSize:100,msgCount:1000-8             96121112                12.46 ns/op     8028.46 MB/s
Benchmark_FileStoreSelectMsgBlock/blockSize:128,msgSize:1000,msgCount:1000-8            96347498                12.36 ns/op     80905.26 MB/s
Benchmark_FileStoreSelectMsgBlock/blockSize:128,msgSize:1000,msgCount:1000-8            94877598                12.41 ns/op     80564.23 MB/s
Benchmark_FileStoreSelectMsgBlock/blockSize:128,msgSize:1000,msgCount:1000-8            95427434                12.42 ns/op     80489.93 MB/s
Benchmark_FileStoreSelectMsgBlock/blockSize:128,msgSize:100,msgCount:10000-8            34678740                32.24 ns/op     3101.94 MB/s
Benchmark_FileStoreSelectMsgBlock/blockSize:128,msgSize:100,msgCount:10000-8            37309516                30.96 ns/op     3229.89 MB/s
Benchmark_FileStoreSelectMsgBlock/blockSize:128,msgSize:100,msgCount:10000-8            37618242                31.55 ns/op     3169.30 MB/s
Benchmark_FileStoreLoadNextMsgSameFilterAsStream/msgSize:10,msgCount:100000,subjects:1024-8              5034210               238.9 ns/op        41.86 MB/s
Benchmark_FileStoreLoadNextMsgSameFilterAsStream/msgSize:10,msgCount:100000,subjects:1024-8              4703763               235.0 ns/op        42.55 MB/s
Benchmark_FileStoreLoadNextMsgSameFilterAsStream/msgSize:10,msgCount:100000,subjects:1024-8              5018709               236.0 ns/op        42.37 MB/s
Benchmark_FileStoreLoadNextMsgSameFilterAsStream/msgSize:100,msgCount:1000,subjects:1024-8               5022440               236.1 ns/op       423.64 MB/s
Benchmark_FileStoreLoadNextMsgSameFilterAsStream/msgSize:100,msgCount:1000,subjects:1024-8               5030818               237.2 ns/op       421.66 MB/s
Benchmark_FileStoreLoadNextMsgSameFilterAsStream/msgSize:100,msgCount:1000,subjects:1024-8               5030712               235.8 ns/op       424.02 MB/s
Benchmark_FileStoreLoadNextMsgSameFilterAsStream/msgSize:10,msgCount:1000000,subjects:10240-8            5044974               235.8 ns/op        42.41 MB/s
Benchmark_FileStoreLoadNextMsgSameFilterAsStream/msgSize:10,msgCount:1000000,subjects:10240-8            5067390               236.9 ns/op        42.21 MB/s
Benchmark_FileStoreLoadNextMsgSameFilterAsStream/msgSize:10,msgCount:1000000,subjects:10240-8            5038150               236.7 ns/op        42.25 MB/s
PASS
ok      github.com/nats-io/nats-server/v2/server        50.270s
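The parameter sets embedded in the sub-benchmark names above suggest a table-driven layout. A minimal sketch of that shape, assuming a parameters struct (the struct, its field names, and the `benchName` helper are illustrative here, not the PR's actual code):

```go
package main

import (
	"fmt"
	"testing"
)

// benchParams describes one configuration of a parametrized benchmark.
// Field names are illustrative; the real struct lives in the PR diff.
type benchParams struct {
	blockSize int
	msgSize   int
	msgCount  int
}

// benchName builds a sub-benchmark name in the style seen in the
// example output, e.g. "blockSize:128,msgSize:100,msgCount:1000".
func benchName(p benchParams) string {
	return fmt.Sprintf("blockSize:%d,msgSize:%d,msgCount:%d",
		p.blockSize, p.msgSize, p.msgCount)
}

// Benchmark_FileStoreSelectMsgBlock sketches the table-driven shape:
// one b.Run per configuration, named after its parameters.
func Benchmark_FileStoreSelectMsgBlock(b *testing.B) {
	cases := []benchParams{
		{blockSize: 128, msgSize: 100, msgCount: 1000},
		{blockSize: 128, msgSize: 1000, msgCount: 1000},
		{blockSize: 128, msgSize: 100, msgCount: 10000},
	}
	for _, p := range cases {
		p := p // capture the loop variable for the closure
		b.Run(benchName(p), func(b *testing.B) {
			// Store setup and the measured loop from the diff
			// would go here.
			_ = p
		})
	}
}

func main() {
	fmt.Println(benchName(benchParams{blockSize: 128, msgSize: 100, msgCount: 1000}))
}
```

Keeping the parameters in one struct means `go test --bench` output stays machine-parseable, and tools like benchstat can compare configurations by name.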

Signed-off-by: Marco Primi <marco@nats.io>
Member

@derekcollison derekcollison left a comment


In general, love the idea. For this benchmark, subject and message diversity per block really don't matter. We need to do a proper maple tree for this, but for now, lookup by sequence (the first step of which is what this tests) is a modified linear search (if small) or a binary search.

However, this parametrization will be awesome for the subject addressing layer, which is way more complicated and is the focus for filestore v2.

@mprimi
Contributor Author

mprimi commented Jun 5, 2024

@derekcollison I'll proceed to parametrize the rest. Once that's done, you can supply the parameter values that make the most sense to test.

Comment thread server/filestore_test.go
defer fs.mu.RUnlock()
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, mb := fs.selectMsgBlockWithIndex(1 + uint64(i%params.msgCount))
Contributor Author


N.b. this was always set to 1 in the original benchmark; now it rolls over the message count. It could also be randomized.
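For illustration, the rolling index versus the randomized alternative mentioned above could look like this (helper names are hypothetical; only the `1 + uint64(i%msgCount)` expression comes from the diff):

```go
package main

import (
	"fmt"
	"math/rand"
)

// rollingSeq reproduces the lookup pattern from the diff: iteration i
// probes sequence 1 + i mod msgCount, cycling through every stored
// sequence instead of always hitting sequence 1.
func rollingSeq(i, msgCount int) uint64 {
	return 1 + uint64(i%msgCount)
}

// randomSeq is the alternative the comment mentions: a uniformly
// random sequence in [1, msgCount] on each iteration.
func randomSeq(r *rand.Rand, msgCount int) uint64 {
	return 1 + uint64(r.Intn(msgCount))
}

func main() {
	r := rand.New(rand.NewSource(1)) // fixed seed for reproducible runs
	for i := 0; i < 5; i++ {
		fmt.Println(rollingSeq(i, 1000), randomSeq(r, 1000))
	}
}
```

The rolling form touches blocks in order, so it exercises cache-friendly sequential lookups; the random form would stress block selection more uniformly but makes runs harder to compare unless the seed is fixed.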

Member


If the value has meaning for the test or benchmark, I try to call it out.

Comment thread server/filestore_test.go
var smv StoreMsg
for i := 0; i < b.N; i++ {
// Needs to start at ~1 to show slowdown.
_, _, err := fs.LoadNextMsg("foo.*", true, 10, &smv)
Contributor Author


Should the hardcoded 10 here also be randomized or varied?
Maybe (n % 100), if we want to keep it consistently small?

Member


Not for this one.

@github-actions github-actions Bot added the stale This issue has had no activity in a while label Aug 1, 2024
@neilalexander
Member

Is this still relevant / for rebasing?
