improvement(perf): add validation rules for latency decorator
Added validation rules for results sent by
`latency_calculator_decorator` to Argus.
Each workload and result name (nemesis, predefined step) may set its own
rules.

The current rules were derived from existing results, so that typical
good results pass.

closes: #9237
soyacz authored and fruch committed Dec 23, 2024
1 parent 03cf4e5 commit bc3552a
Showing 17 changed files with 308 additions and 22 deletions.
@@ -0,0 +1,51 @@
latency_decorator_error_thresholds:
write:
_mgmt_repair_cli:
duration:
fixed_limit: 7200
_terminate_and_wait:
duration:
fixed_limit: 450
add_new_nodes:
duration:
fixed_limit: 2500
decommission_nodes:
duration:
fixed_limit: 1800
replace_node:
duration:
fixed_limit: 3600

read:
_mgmt_repair_cli:
duration:
fixed_limit: 3200
_terminate_and_wait:
duration:
fixed_limit: 450
add_new_nodes:
duration:
fixed_limit: 3200
decommission_nodes:
duration:
fixed_limit: 1800
replace_node:
duration:
fixed_limit: 3000

mixed:
_mgmt_repair_cli:
duration:
fixed_limit: 4200
_terminate_and_wait:
duration:
fixed_limit: 450
add_new_nodes:
duration:
fixed_limit: 2500
decommission_nodes:
duration:
fixed_limit: 1600
replace_node:
duration:
fixed_limit: 3000
@@ -0,0 +1,51 @@
latency_decorator_error_thresholds:
write:
_mgmt_repair_cli:
duration:
fixed_limit: 7200
_terminate_and_wait:
duration:
fixed_limit: 450
add_new_nodes:
duration:
fixed_limit: 4200
decommission_nodes:
duration:
fixed_limit: 5200
replace_node:
duration:
fixed_limit: 1800

read:
_mgmt_repair_cli:
duration:
fixed_limit: 2000
_terminate_and_wait:
duration:
fixed_limit: 450
add_new_nodes:
duration:
fixed_limit: 1800
decommission_nodes:
duration:
fixed_limit: 2500
replace_node:
duration:
fixed_limit: 1300

mixed:
_mgmt_repair_cli:
duration:
fixed_limit: 2500
_terminate_and_wait:
duration:
fixed_limit: 450
add_new_nodes:
duration:
fixed_limit: 2400
decommission_nodes:
duration:
fixed_limit: 2800
replace_node:
duration:
fixed_limit: 1500
@@ -0,0 +1,73 @@
latency_decorator_error_thresholds:
write:
unthrottled:
P90 write:
fixed_limit: null
P99 write:
fixed_limit: null
Throughput write:
best_pct: 5

read:
"150000":
P90 read:
fixed_limit: 1
P99 read:
fixed_limit: 1
"300000":
P90 read:
fixed_limit: 1
P99 read:
fixed_limit: 1
"450000":
P90 read:
fixed_limit: 1
P99 read:
fixed_limit: 3
unthrottled:
P90 read:
fixed_limit: null
P99 read:
fixed_limit: null
Throughput read:
best_pct: 5

mixed:
"50000":
P90 write:
fixed_limit: 1
P90 read:
fixed_limit: 1
P99 write:
fixed_limit: 3
P99 read:
fixed_limit: 3
"150000":
P90 write:
fixed_limit: 1
P90 read:
fixed_limit: 2
P99 write:
fixed_limit: 3
P99 read:
fixed_limit: 3
"300000":
P90 write:
fixed_limit: 3
P90 read:
fixed_limit: 3
P99 write:
fixed_limit: 5
P99 read:
fixed_limit: 5
unthrottled:
P90 write:
fixed_limit: null
P90 read:
fixed_limit: null
P99 write:
fixed_limit: null
P99 read:
fixed_limit: null
Throughput write:
best_pct: 5
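The unthrottled steps validate throughput with `best_pct` instead of a fixed limit. Assuming `best_pct: 5` means the new throughput may not fall more than 5% below the best previously recorded value — an interpretation for illustration, not confirmed by this diff — the rule could look like:

```python
def check_best_pct(value: float, best: float, best_pct: float) -> str:
    """Hypothetical best_pct rule: fail when `value` drops more than
    `best_pct` percent below the best historical result `best`."""
    floor = best * (1 - best_pct / 100)
    return "PASS" if value >= floor else "ERROR"

# best historical throughput 100k ops/s, tolerance 5% -> floor is 95k
print(check_best_pct(96_000, 100_000, 5))  # PASS
print(check_best_pct(94_000, 100_000, 5))  # ERROR
```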
@@ -0,0 +1,73 @@
latency_decorator_error_thresholds:
write:
unthrottled:
P90 write:
fixed_limit: null
P99 write:
fixed_limit: null
Throughput write:
best_pct: 5

read:
"150000":
P90 read:
fixed_limit: 1
P99 read:
fixed_limit: 1
"300000":
P90 read:
fixed_limit: 1
P99 read:
fixed_limit: 1
"450000":
P90 read:
fixed_limit: 1
P99 read:
fixed_limit: 5
unthrottled:
P90 read:
fixed_limit: null
P99 read:
fixed_limit: null
Throughput read:
best_pct: 5

mixed:
"50000":
P90 write:
fixed_limit: 1
P90 read:
fixed_limit: 1
P99 write:
fixed_limit: 3
P99 read:
fixed_limit: 3
"150000":
P90 write:
fixed_limit: 1
P90 read:
fixed_limit: 2
P99 write:
fixed_limit: 3
P99 read:
fixed_limit: 3
"300000":
P90 write:
fixed_limit: 3
P90 read:
fixed_limit: 3
P99 write:
fixed_limit: 5
P99 read:
fixed_limit: 5
unthrottled:
P90 write:
fixed_limit: null
P90 read:
fixed_limit: null
P99 write:
fixed_limit: null
P99 read:
fixed_limit: null
Throughput write:
best_pct: 5
24 changes: 24 additions & 0 deletions defaults/test_default.yaml
@@ -268,3 +268,27 @@ zero_token_instance_type_db: 'i4i.large'
use_zero_nodes: false

latte_schema_parameters: {}

latency_decorator_error_thresholds:
write:
default:
P90 write:
fixed_limit: 5
P99 write:
fixed_limit: 10
read:
default:
P90 read:
fixed_limit: 5
P99 read:
fixed_limit: 10
mixed:
default:
P90 write:
fixed_limit: 5
P90 read:
fixed_limit: 5
P99 write:
fixed_limit: 10
P99 read:
fixed_limit: 10
7 changes: 7 additions & 0 deletions docs/configuration_options.md
@@ -2735,3 +2735,10 @@ Instance type for zero token node
AWS account id on behalf of which the test is run

**default:** N/A


## **latency_decorator_error_thresholds** / SCT_LATENCY_DECORATOR_ERROR_THRESHOLDS

Error thresholds for the latency decorator. Defined as a dict: `{<write|read|mixed>: {<default|nemesis_name>: {<metric_name>: {<rule>: <value>}}}}`

**default:** {'write': {'default': {'P90 write': {'fixed_limit': 5}, 'P99 write': {'fixed_limit': 10}}}, 'read': {'default': {'P90 read': {'fixed_limit': 5}, 'P99 read': {'fixed_limit': 10}}}, 'mixed': {'default': {'P90 write': {'fixed_limit': 5}, 'P90 read': {'fixed_limit': 5}, 'P99 write': {'fixed_limit': 10}, 'P99 read': {'fixed_limit': 10}}}}
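The nested structure maps workload → result name → metric → rule, with `default` as the fallback. A sketch of resolving the effective rules for one result, mirroring the dict-union merge this commit adds in `sdcm/argus_results.py`:

```python
# A trimmed-down thresholds dict in the documented shape
thresholds = {
    "write": {
        "default": {"P90 write": {"fixed_limit": 5}, "P99 write": {"fixed_limit": 10}},
        "_mgmt_repair_cli": {"duration": {"fixed_limit": 7200}},
    },
}

def effective_rules(thresholds: dict, workload: str, name: str) -> dict:
    # dict union (Python 3.9+): per-name rules overlay the workload defaults
    return thresholds[workload]["default"] | thresholds[workload].get(name, {})

# Rules for the repair nemesis keep the defaults and gain the duration rule
print(effective_rules(thresholds, "write", "_mgmt_repair_cli"))
# An unknown result name falls back to the workload defaults alone
print(effective_rules(thresholds, "write", "some_other_nemesis"))
```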
@@ -6,7 +6,7 @@ def lib = library identifier: 'sct@snapshot', retriever: legacySCM(scm)
perfRegressionParallelPipeline(
backend: "aws",
test_name: "performance_regression_test.PerformanceRegressionTest",
test_config: """["test-cases/performance/perf-regression-latency-650gb-with-nemesis.yaml", "configurations/disable_kms.yaml"]""",
test_config: """["test-cases/performance/perf-regression-latency-650gb-with-nemesis.yaml", "configurations/disable_kms.yaml", "configurations/performance/latency-decorator-error-thresholds-nemesis-ent-tablets.yaml"]""",
sub_tests: ["test_latency_write_with_nemesis", "test_latency_read_with_nemesis", "test_latency_mixed_with_nemesis"],
test_email_title: "latency during operations / tablets",
perf_extra_jobs_to_compare: "scylla-master/perf-regression/scylla-master-perf-regression-latency-650gb-with-nemesis-tablets",
@@ -6,7 +6,7 @@ def lib = library identifier: 'sct@snapshot', retriever: legacySCM(scm)
perfRegressionParallelPipeline(
backend: "aws",
test_name: "performance_regression_test.PerformanceRegressionTest",
test_config: """["test-cases/performance/perf-regression-latency-650gb-with-nemesis.yaml", "configurations/tablets_disabled.yaml", "configurations/disable_kms.yaml"]""",
test_config: """["test-cases/performance/perf-regression-latency-650gb-with-nemesis.yaml", "configurations/tablets_disabled.yaml", "configurations/disable_kms.yaml", "configurations/performance/latency-decorator-error-thresholds-nemesis-ent-vnodes.yaml"]""",
sub_tests: ["test_latency_write_with_nemesis", "test_latency_read_with_nemesis", "test_latency_mixed_with_nemesis"],
perf_extra_jobs_to_compare: """["scylla-enterprise/scylla-enterprise-perf-regression-latency-650gb-with-nemesis","scylla-enterprise/perf-regression/scylla-enterprise-perf-regression-latency-650gb-with-nemesis"]""",
)
@@ -7,6 +7,6 @@ perfRegressionParallelPipeline(
backend: "aws",
aws_region: "us-east-1",
test_name: "performance_regression_gradual_grow_throughput.PerformanceRegressionPredefinedStepsTest",
test_config: '''["test-cases/performance/perf-regression-predefined-throughput-steps.yaml", "configurations/performance/cassandra_stress_gradual_load_steps_enterprise.yaml", "configurations/disable_kms.yaml", "configurations/tablets_disabled.yaml", "configurations/disable_speculative_retry.yaml"]''',
test_config: '''["test-cases/performance/perf-regression-predefined-throughput-steps.yaml", "configurations/performance/cassandra_stress_gradual_load_steps_enterprise.yaml", "configurations/disable_kms.yaml", "configurations/tablets_disabled.yaml", "configurations/disable_speculative_retry.yaml", "configurations/performance/latency-decorator-error-thresholds-steps-ent-vnodes.yaml"]''',
sub_tests: ["test_write_gradual_increase_load", "test_read_gradual_increase_load", "test_mixed_gradual_increase_load"],
)
@@ -7,6 +7,6 @@ perfRegressionParallelPipeline(
backend: "aws",
aws_region: "us-east-1",
test_name: "performance_regression_gradual_grow_throughput.PerformanceRegressionPredefinedStepsTest",
test_config: '''["test-cases/performance/perf-regression-predefined-throughput-steps.yaml", "configurations/performance/cassandra_stress_gradual_load_steps_enterprise.yaml", "configurations/disable_kms.yaml", "configurations/disable_speculative_retry.yaml"]''',
test_config: '''["test-cases/performance/perf-regression-predefined-throughput-steps.yaml", "configurations/performance/cassandra_stress_gradual_load_steps_enterprise.yaml", "configurations/disable_kms.yaml", "configurations/disable_speculative_retry.yaml", "configurations/performance/latency-decorator-error-thresholds-steps-ent-tablets.yaml"]''',
sub_tests: ["test_read_gradual_increase_load", "test_mixed_gradual_increase_load"],
)
@@ -7,6 +7,6 @@ perfRegressionParallelPipeline(
backend: "aws",
aws_region: "us-east-1",
test_name: "performance_regression_gradual_grow_throughput.PerformanceRegressionPredefinedStepsTest",
test_config: '''["test-cases/performance/perf-regression-predefined-throughput-steps.yaml", "configurations/performance/cassandra_stress_gradual_load_steps_enterprise.yaml", "configurations/disable_kms.yaml", "configurations/tablets_disabled.yaml", "configurations/disable_speculative_retry.yaml"]''',
test_config: '''["test-cases/performance/perf-regression-predefined-throughput-steps.yaml", "configurations/performance/cassandra_stress_gradual_load_steps_enterprise.yaml", "configurations/disable_kms.yaml", "configurations/tablets_disabled.yaml", "configurations/disable_speculative_retry.yaml", "configurations/performance/latency-decorator-error-thresholds-steps-ent-vnodes.yaml"]''',
sub_tests: ["test_read_gradual_increase_load", "test_mixed_gradual_increase_load"],
)
@@ -7,6 +7,6 @@ perfRegressionParallelPipeline(
backend: "aws",
aws_region: "us-east-1",
test_name: "performance_regression_gradual_grow_throughput.PerformanceRegressionPredefinedStepsTest",
test_config: '''["test-cases/performance/perf-regression-predefined-throughput-steps.yaml", "configurations/performance/cassandra_stress_gradual_load_steps_enterprise.yaml", "configurations/disable_kms.yaml", "configurations/disable_speculative_retry.yaml","configurations/perf-loaders-shard-aware-config.yaml"]''',
test_config: '''["test-cases/performance/perf-regression-predefined-throughput-steps.yaml", "configurations/performance/cassandra_stress_gradual_load_steps_enterprise.yaml", "configurations/disable_kms.yaml", "configurations/disable_speculative_retry.yaml","configurations/perf-loaders-shard-aware-config.yaml", "configurations/performance/latency-decorator-error-thresholds-steps-ent-tablets.yaml"]''',
sub_tests: ["test_write_gradual_increase_load"],
)
@@ -7,6 +7,6 @@ perfRegressionParallelPipeline(
backend: "aws",
aws_region: "us-east-1",
test_name: "performance_regression_gradual_grow_throughput.PerformanceRegressionPredefinedStepsTest",
test_config: '''["test-cases/performance/perf-regression-predefined-throughput-steps.yaml", "configurations/performance/cassandra_stress_gradual_load_steps_enterprise.yaml", "configurations/disable_kms.yaml", "configurations/tablets_disabled.yaml", "configurations/disable_speculative_retry.yaml","configurations/perf-loaders-shard-aware-config.yaml"]''',
test_config: '''["test-cases/performance/perf-regression-predefined-throughput-steps.yaml", "configurations/performance/cassandra_stress_gradual_load_steps_enterprise.yaml", "configurations/disable_kms.yaml", "configurations/tablets_disabled.yaml", "configurations/disable_speculative_retry.yaml", "configurations/perf-loaders-shard-aware-config.yaml", "configurations/performance/latency-decorator-error-thresholds-steps-ent-vnodes.yaml"]''',
sub_tests: ["test_write_gradual_increase_load"],
)
15 changes: 8 additions & 7 deletions sdcm/argus_results.py
@@ -154,11 +154,13 @@ def submit_results_to_argus(argus_client: ArgusClient, result_table: GenericResu


def send_result_to_argus(argus_client: ArgusClient, workload: str, name: str, description: str, cycle: int, result: dict,
start_time: float = 0):
start_time: float = 0, error_thresholds: dict = None):
result_table = workload_to_table[workload]()
result_table.name = f"{workload} - {name} - latencies"
result_table.description = f"{workload} workload - {description}"
operation_error_thresholds = LATENCY_ERROR_THRESHOLDS.get(name, LATENCY_ERROR_THRESHOLDS["default"])
if error_thresholds:
error_thresholds = error_thresholds[workload]["default"] | error_thresholds[workload].get(name, {})
result_table.validation_rules = {metric: ValidationRule(**rules) for metric, rules in error_thresholds.items()}
try:
start_time = datetime.fromtimestamp(start_time or time.time(), tz=timezone.utc).strftime('%H:%M:%S')
except ValueError:
@@ -172,16 +174,15 @@ def send_result_to_argus(argus_client: ArgusClient, workload: str, name: str, de
result_table.add_result(column=f"P{percentile} {operation}",
row=f"Cycle #{cycle}",
value=value,
status=Status.PASS if value < operation_error_thresholds[f"percentile_{percentile}"] else Status.ERROR)
status=Status.UNSET)
if value := summary[operation.upper()].get("throughput", None):
# TODO: This column will be validated in the gradual test. `PASS` is temporary status. Should be handled later
result_table.add_result(column=f"Throughput {operation.lower()}",
row=f"Cycle #{cycle}",
value=value,
status=Status.UNSET)

result_table.add_result(column="duration", row=f"Cycle #{cycle}",
value=result["duration_in_sec"], status=Status.PASS)
value=result["duration_in_sec"], status=Status.UNSET)
try:
overview_screenshot = [screenshot for screenshot in result["screenshots"] if "overview" in screenshot][0]
result_table.add_result(column="Overview", row=f"Cycle #{cycle}",
@@ -205,10 +206,10 @@ def send_result_to_argus(argus_client: ArgusClient, workload: str, name: str, de
result_table.name = f"{workload} - {name} - stalls - {event_name}"
result_table.description = f"{event_name} event counts"
result_table.add_result(column="total", row=f"Cycle #{cycle}",
value=stall_stats["counter"], status=Status.PASS)
value=stall_stats["counter"], status=Status.UNSET)
for interval, value in stall_stats["ms"].items():
result_table.add_result(column=f"{interval}ms", row=f"Cycle #{cycle}",
value=value, status=Status.PASS)
value=value, status=Status.UNSET)
submit_results_to_argus(argus_client, result_table)

