Commit 9ff9d8c (1 parent: a3300e9)
Showing 7 changed files with 461 additions and 435 deletions.
@@ -0,0 +1,24 @@
goos: linux
goarch: amd64
pkg: github.com/grafana/mimir/pkg/scheduler/queue
cpu: AMD Ryzen 9 PRO 6950H with Radeon Graphics
BenchmarkConcurrentQueueOperations/1_tenants/10_concurrent_producers/16_concurrent_consumers-16       3375270   3582 ns/op
BenchmarkConcurrentQueueOperations/1_tenants/10_concurrent_producers/160_concurrent_consumers-16      3158005   3817 ns/op
BenchmarkConcurrentQueueOperations/1_tenants/10_concurrent_producers/1600_concurrent_consumers-16     2469280   4872 ns/op
BenchmarkConcurrentQueueOperations/1_tenants/25_concurrent_producers/16_concurrent_consumers-16       3133611   3877 ns/op
BenchmarkConcurrentQueueOperations/1_tenants/25_concurrent_producers/160_concurrent_consumers-16      2882362   4214 ns/op
BenchmarkConcurrentQueueOperations/1_tenants/25_concurrent_producers/1600_concurrent_consumers-16     2554503   4684 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/10_concurrent_producers/16_concurrent_consumers-16      3180195   3682 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/10_concurrent_producers/160_concurrent_consumers-16     3052290   3901 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/10_concurrent_producers/1600_concurrent_consumers-16    2420862   4755 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/25_concurrent_producers/16_concurrent_consumers-16      3111976   3767 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/25_concurrent_producers/160_concurrent_consumers-16     3040722   3943 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/25_concurrent_producers/1600_concurrent_consumers-16    2566794   4633 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/10_concurrent_producers/16_concurrent_consumers-16    3222010   3926 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/10_concurrent_producers/160_concurrent_consumers-16   2796484   4410 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/10_concurrent_producers/1600_concurrent_consumers-16  2399295   4669 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/25_concurrent_producers/16_concurrent_consumers-16    3091362   3847 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/25_concurrent_producers/160_concurrent_consumers-16   2992759   3959 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/25_concurrent_producers/1600_concurrent_consumers-16  2485108   4807 ns/op
PASS
ok   github.com/grafana/mimir/pkg/scheduler/queue   292.419s
@@ -0,0 +1,24 @@
goos: linux
goarch: amd64
pkg: github.com/grafana/mimir/pkg/scheduler/queue
cpu: AMD Ryzen 9 PRO 6950H with Radeon Graphics
BenchmarkConcurrentQueueOperations/1_tenants/10_concurrent_producers/16_concurrent_consumers-16       3375884   3474 ns/op
BenchmarkConcurrentQueueOperations/1_tenants/10_concurrent_producers/160_concurrent_consumers-16      3243482   3666 ns/op
BenchmarkConcurrentQueueOperations/1_tenants/10_concurrent_producers/1600_concurrent_consumers-16     2816721   4222 ns/op
BenchmarkConcurrentQueueOperations/1_tenants/25_concurrent_producers/16_concurrent_consumers-16       3460167   3495 ns/op
BenchmarkConcurrentQueueOperations/1_tenants/25_concurrent_producers/160_concurrent_consumers-16      3213451   3730 ns/op
BenchmarkConcurrentQueueOperations/1_tenants/25_concurrent_producers/1600_concurrent_consumers-16     2842573   4287 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/10_concurrent_producers/16_concurrent_consumers-16      3229412   3664 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/10_concurrent_producers/160_concurrent_consumers-16     3111933   3913 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/10_concurrent_producers/1600_concurrent_consumers-16    2624695   4583 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/25_concurrent_producers/16_concurrent_consumers-16      3245578   3680 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/25_concurrent_producers/160_concurrent_consumers-16     3026472   3864 ns/op
BenchmarkConcurrentQueueOperations/10_tenants/25_concurrent_producers/1600_concurrent_consumers-16    2503609   4762 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/10_concurrent_producers/16_concurrent_consumers-16    3292596   3629 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/10_concurrent_producers/160_concurrent_consumers-16   3137596   3817 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/10_concurrent_producers/1600_concurrent_consumers-16  2726665   4568 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/25_concurrent_producers/16_concurrent_consumers-16    3080842   3869 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/25_concurrent_producers/160_concurrent_consumers-16   2937258   3981 ns/op
BenchmarkConcurrentQueueOperations/1000_tenants/25_concurrent_producers/1600_concurrent_consumers-16  2483106   4968 ns/op
PASS
ok   github.com/grafana/mimir/pkg/scheduler/queue   289.483s
234 changes: 234 additions & 0 deletions
pkg/scheduler/queue/tree/tenant_deletion_benchmark_test.go
@@ -0,0 +1,234 @@
package tree

import (
	"fmt"
	"math/rand"
	"slices"
	"strconv"
	"testing"
)

type fullTenantDeleteFunc func(tenantNodes map[string][]*Node, tenantIDOrder []string, tenant string) (map[string][]*Node, []string)

func baseline(tenantNodes map[string][]*Node, tenantIDOrder []string, tenant string) (map[string][]*Node, []string) {
	// delete from shared tenantNodes
	for i := range tenantNodes[tenant] {
		// only ever going to have a slice of length one here for simplicity
		// and best representation of the actual case we are benchmarking;
		// no need to check if this is the dequeuedFrom node
		//if tenantNode == dequeuedFrom {
		tenantNodes[tenant] = append(tenantNodes[tenant][:i], tenantNodes[tenant][i+1:]...)
		//}
	}

	for idx, name := range tenantIDOrder {
		if name == tenant {
			tenantIDOrder[idx] = string(emptyTenantID)
		}
	}

	// clear all sequential empty elements from the end of tenantIDOrder
	lastElementIndex := len(tenantIDOrder) - 1
	for i := lastElementIndex; i >= 0 && tenantIDOrder[i] == ""; i-- {
		tenantIDOrder = tenantIDOrder[:i]
	}
	return tenantNodes, tenantIDOrder
}

func breakEarly(tenantNodes map[string][]*Node, tenantIDOrder []string, tenant string) (map[string][]*Node, []string) {
	// delete from shared tenantNodes
	for i := range tenantNodes[tenant] {
		// only ever going to have a slice of length one here for simplicity
		// and best representation of the actual case we are benchmarking;
		// no need to check if this is the dequeuedFrom node
		//if tenantNode == dequeuedFrom {
		tenantNodes[tenant] = append(tenantNodes[tenant][:i], tenantNodes[tenant][i+1:]...)
		//}
	}

	for idx, name := range tenantIDOrder {
		if name == tenant {
			tenantIDOrder[idx] = string(emptyTenantID)
			break
		}
	}

	// clear all sequential empty elements from the end of tenantIDOrder
	lastElementIndex := len(tenantIDOrder) - 1
	for i := lastElementIndex; i >= 0 && tenantIDOrder[i] == ""; i-- {
		tenantIDOrder = tenantIDOrder[:i]
	}
	return tenantNodes, tenantIDOrder
}

func breakEarlyShrinkOnce(tenantNodes map[string][]*Node, tenantIDOrder []string, tenant string) (map[string][]*Node, []string) {
	// delete from shared tenantNodes
	for i := range tenantNodes[tenant] {
		// only ever going to have a slice of length one here for simplicity
		// and best representation of the actual case we are benchmarking;
		// no need to check if this is the dequeuedFrom node
		//if tenantNode == dequeuedFrom {
		tenantNodes[tenant] = slices.Delete(tenantNodes[tenant], i, i+1)
		//}
	}

	for idx, name := range tenantIDOrder {
		if name == tenant {
			tenantIDOrder[idx] = string(emptyTenantID)
			break
		}
	}

	emptyTenantIDsAtEnd := 0
	for i := len(tenantIDOrder) - 1; i >= 0 && tenantIDOrder[i] == ""; i-- {
		emptyTenantIDsAtEnd++
	}
	tenantIDOrder = slices.Delete(
		tenantIDOrder,
		len(tenantIDOrder)-emptyTenantIDsAtEnd,
		len(tenantIDOrder),
	)

	return tenantNodes, tenantIDOrder
}

func breakEarlyShrinkOnceNoSlices(tenantNodes map[string][]*Node, tenantIDOrder []string, tenant string) (map[string][]*Node, []string) {
	// delete from shared tenantNodes
	for i := range tenantNodes[tenant] {
		// only ever going to have a slice of length one here for simplicity
		// and best representation of the actual case we are benchmarking;
		// no need to check if this is the dequeuedFrom node
		//if tenantNode == dequeuedFrom {
		tenantNodes[tenant] = append(tenantNodes[tenant][:i], tenantNodes[tenant][i+1:]...)
		//}
	}

	for idx, name := range tenantIDOrder {
		if name == tenant {
			tenantIDOrder[idx] = string(emptyTenantID)
			break
		}
	}

	emptyTenantIDsAtEnd := 0
	for i := len(tenantIDOrder) - 1; i >= 0 && tenantIDOrder[i] == ""; i-- {
		emptyTenantIDsAtEnd++
	}
	tenantIDOrder = tenantIDOrder[:len(tenantIDOrder)-emptyTenantIDsAtEnd]

	return tenantNodes, tenantIDOrder
}

func BenchmarkFullTenantDeletion(b *testing.B) {
	tenantCount := 10000

	var testCases = []struct {
		name                string
		deleteFunc          fullTenantDeleteFunc
		tenantNodes         map[string][]*Node
		tenantRotationOrder []string
		tenantDeletionOrder []string
	}{
		{"baseline", baseline, nil, nil, nil},
		{"breakEarlyShrinkOnce", breakEarlyShrinkOnce, nil, nil, nil},
		{"breakEarlyShrinkOnceNoSlices", breakEarlyShrinkOnceNoSlices, nil, nil, nil},
	}

	makeStubs := func() (tenantNodes map[string][]*Node, tenantRotationOrder, tenantDeletionOrder []string) {
		tenantNodes = make(map[string][]*Node, tenantCount)
		tenantRotationOrder = make([]string, tenantCount)

		for i := range tenantCount {
			tenantRotationOrder[i] = strconv.Itoa(i)
			tenantNodes[strconv.Itoa(i)] = append(tenantNodes[strconv.Itoa(i)], &Node{})
		}

		tenantDeletionOrder = make([]string, tenantCount)
		copy(tenantDeletionOrder, tenantRotationOrder)
		rand.Shuffle(len(tenantDeletionOrder), func(i, j int) {
			tenantDeletionOrder[i], tenantDeletionOrder[j] = tenantDeletionOrder[j], tenantDeletionOrder[i]
		})

		return tenantNodes, tenantRotationOrder, tenantDeletionOrder
	}

	for _, testCase := range testCases {
		b.Run(fmt.Sprintf("delete_tenant_func_%s", testCase.name), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				tenantNodes, tenantRotationOrder, tenantDeletionOrder := makeStubs()
				testCase.tenantNodes = tenantNodes
				testCase.tenantRotationOrder = tenantRotationOrder
				testCase.tenantDeletionOrder = tenantDeletionOrder

				tenantDeleteIdx := 0
				for len(testCase.tenantRotationOrder) > 0 {
					testCase.tenantNodes, testCase.tenantRotationOrder = testCase.deleteFunc(
						testCase.tenantNodes, testCase.tenantRotationOrder, testCase.tenantDeletionOrder[tenantDeleteIdx],
					)
					tenantDeleteIdx++
				}
			}
		})
	}
}

type makeQueuePathFunc func(component, tenant string) QueuePath

func baselineMakeQueuePath(component, tenant string) QueuePath {
	return append([]string{component}, tenant)
}

func noAppendMakeQueuePath(component, tenant string) QueuePath {
	return QueuePath{component, tenant}
}

func BenchmarkMakeQueuePath(b *testing.B) {
	tenantCount := 10000

	tenants := make([]string, tenantCount)
	queueComponents := make([]string, tenantCount)
	for i := range tenantCount {
		tenants[i] = strconv.Itoa(i)
		queueComponents[i] = randAdditionalQueueDimension()
	}

	var testCases = []struct {
		name     string
		pathFunc makeQueuePathFunc
	}{
		{"baselineMakeQueuePath", baselineMakeQueuePath},
		{"noAppendMakeQueuePath", noAppendMakeQueuePath},
	}

	for _, testCase := range testCases {
		b.Run(fmt.Sprintf("queue_path_func_%s", testCase.name), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				for tenantIdx := range tenantCount {
					_ = testCase.pathFunc(
						queueComponents[tenantIdx], tenants[tenantIdx],
					)
				}
			}
		})
	}
}

const ingesterQueueDimension = "ingester"
const storeGatewayQueueDimension = "store-gateway"
const ingesterAndStoreGatewayQueueDimension = "ingester-and-store-gateway"
const unknownQueueDimension = "unknown"

var secondQueueDimensionOptions = []string{
	ingesterQueueDimension,
	storeGatewayQueueDimension,
	ingesterAndStoreGatewayQueueDimension,
	unknownQueueDimension,
}

func randAdditionalQueueDimension() string {
	// rand.Intn(n) returns values in [0, n), so draw over the full slice
	// length to make every option, including unknownQueueDimension, reachable
	idx := rand.Intn(len(secondQueueDimensionOptions))
	return secondQueueDimensionOptions[idx]
}
@@ -0,0 +1,27 @@
goos: linux
goarch: amd64
pkg: github.com/grafana/mimir/pkg/scheduler/queue
cpu: AMD Ryzen 9 PRO 6950H with Radeon Graphics
                                                                                          │ baseline.txt │  break.txt │
                                                                                          │    sec/op    │    sec/op    vs base │
ConcurrentQueueOperations/1_tenants/10_concurrent_producers/16_concurrent_consumers-16      3.582µ ± ∞ ¹   3.474µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1_tenants/10_concurrent_producers/160_concurrent_consumers-16     3.817µ ± ∞ ¹   3.666µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1_tenants/10_concurrent_producers/1600_concurrent_consumers-16    4.872µ ± ∞ ¹   4.222µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1_tenants/25_concurrent_producers/16_concurrent_consumers-16      3.877µ ± ∞ ¹   3.495µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1_tenants/25_concurrent_producers/160_concurrent_consumers-16     4.214µ ± ∞ ¹   3.730µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1_tenants/25_concurrent_producers/1600_concurrent_consumers-16    4.684µ ± ∞ ¹   4.287µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/10_tenants/10_concurrent_producers/16_concurrent_consumers-16     3.682µ ± ∞ ¹   3.664µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/10_tenants/10_concurrent_producers/160_concurrent_consumers-16    3.901µ ± ∞ ¹   3.913µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/10_tenants/10_concurrent_producers/1600_concurrent_consumers-16   4.755µ ± ∞ ¹   4.583µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/10_tenants/25_concurrent_producers/16_concurrent_consumers-16     3.767µ ± ∞ ¹   3.680µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/10_tenants/25_concurrent_producers/160_concurrent_consumers-16    3.943µ ± ∞ ¹   3.864µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/10_tenants/25_concurrent_producers/1600_concurrent_consumers-16   4.633µ ± ∞ ¹   4.762µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1000_tenants/10_concurrent_producers/16_concurrent_consumers-16   3.926µ ± ∞ ¹   3.629µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1000_tenants/10_concurrent_producers/160_concurrent_consumers-16  4.410µ ± ∞ ¹   3.817µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1000_tenants/10_concurrent_producers/1600_concurrent_consumers-16 4.669µ ± ∞ ¹   4.568µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1000_tenants/25_concurrent_producers/16_concurrent_consumers-16   3.847µ ± ∞ ¹   3.869µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1000_tenants/25_concurrent_producers/160_concurrent_consumers-16  3.959µ ± ∞ ¹   3.981µ ± ∞ ¹   ~ (p=1.000 n=1) ²
ConcurrentQueueOperations/1000_tenants/25_concurrent_producers/1600_concurrent_consumers-16 4.807µ ± ∞ ¹   4.968µ ± ∞ ¹   ~ (p=1.000 n=1) ²
geomean                                                                                     4.164µ         3.987µ        -4.26%
¹ need >= 6 samples for confidence interval at level 0.95
² need >= 4 samples to detect a difference at alpha level 0.05