Locust is a Python-based load testing framework. Documentation can be found here: Documentation
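For context, a Locust test file is plain Python. A minimal sketch is shown below; the endpoint and wait times are placeholders, not part of this project:

```python
# Minimal Locust test sketch -- the endpoint and timings are illustrative only
from locust import HttpUser, task, between

class SearchUser(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks
    wait_time = between(1, 3)

    @task
    def search(self):
        # Hypothetical endpoint; the real requests live in load_test.py
        self.client.get("/search?q=locust")
```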
Since we are working with a docker-compose setup, you only need Docker installed on your machine.
You can run distributed tests with docker-compose and easily scale up more workers: Distributed tests
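For orientation, a master/worker Compose file along the lines of the official Locust example might look like the sketch below; the image, mount path, and service names are assumptions rather than this repository's exact file:

```yaml
# Illustrative docker-compose.yml for distributed Locust -- names and paths are assumptions
version: "3"
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/load_test.py --master

  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/load_test.py --worker --master-host master
```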
- Check if the `.env` file contains the correct configuration for your test run
- Run `docker-compose up --scale worker=4`, with as many workers as you want
- After your test, run `docker-compose down`, or just press `Ctrl + C` if you are running docker-compose in interactive mode
Using GitHub Actions, it is possible to run the performance tests directly from the repository. The pipeline is configured to receive a few parameters, including the test script to run and the number of workers, which makes it flexible for different test runs.
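The actual workflow lives in the repository, but a manually triggered pipeline with those parameters could be sketched as follows (the input names and the checkout step are illustrative assumptions):

```yaml
# Illustrative workflow_dispatch trigger -- input names are assumptions
on:
  workflow_dispatch:
    inputs:
      locustfile:
        description: "Test script to run"
        default: "load_test.py"
      workers:
        description: "Number of Locust workers"
        default: "4"

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run load test
        env:
          # LOCUST_LOCUSTFILE is Locust's standard env-var form of the -f option
          LOCUST_LOCUSTFILE: ${{ github.event.inputs.locustfile }}
        run: docker-compose up --scale worker=${{ github.event.inputs.workers }} --abort-on-container-exit
```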
- `__init__.py`: Here we have the setup and teardown for our tests. You can insert here anything that needs to be executed at the beginning and at the end of the test execution
- `.env`: This is the general config file. You can specify some parameter values like host, locust file, etc.
- `data/*.py`: We separated the query and filter values into separate test data files, in order to make the tests more readable and maintainable
- `testConfig/*.py`: This folder contains the ramp and KPI configs for each specific performance test
- `common/*.py`: Here we have shared functions and values that are applied in multiple test files
- `kpi_checker.py`: This file contains auxiliary methods necessary for the automated KPI validation
- `load_test.py`: These are our performance test files. They include our requests and other additional information
- `load_test_custom_shape.py`: This is the custom ramp configuration that is imported and used by the different load tests (see the sketch below)
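To illustrate how such a custom ramp works, Locust's `LoadTestShape` API can be used roughly as follows; the stage durations and user counts below are invented for the example, not this repository's actual ramp:

```python
# Illustrative custom ramp using Locust's LoadTestShape API -- stage values are made up
from locust import LoadTestShape

class StagesShape(LoadTestShape):
    # Each stage runs until `duration` seconds of total run time,
    # targeting `users` users spawned at `spawn_rate` users/second
    stages = [
        {"duration": 60, "users": 10, "spawn_rate": 2},
        {"duration": 180, "users": 50, "spawn_rate": 5},
        {"duration": 240, "users": 10, "spawn_rate": 5},
    ]

    def tick(self):
        run_time = self.get_run_time()
        for stage in self.stages:
            if run_time < stage["duration"]:
                return stage["users"], stage["spawn_rate"]
        return None  # returning None stops the test
```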
We are using a config that enables us to set a few KPI goals for a test execution through environment variables. Right now, we have the possibility to use 5 KPIs:
- RPS
- Maximum fail ratio
- 90th percentile response time
- 95th percentile response time
- 99th percentile response time
The values are passed to Locust through the command line. Since we are using a docker-compose setup, you can check and edit the KPI goals in the .env file.
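As an illustration, the .env entries could look like the sketch below; `LOCUST_HOST` and `LOCUST_LOCUSTFILE` are standard Locust environment variables, while the KPI variable names here are assumptions, so check the actual file for the ones this project uses:

```
# Illustrative .env entries -- KPI variable names are assumptions
LOCUST_HOST=https://my-service.example.com
LOCUST_LOCUSTFILE=load_test.py
KPI_RPS=10
KPI_MAX_FAIL_RATIO=0.01
KPI_90_PERCENTILE=200
KPI_95_PERCENTILE=500
KPI_99_PERCENTILE=1000
```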
When the results don't meet the KPI goals, messages are printed in the terminal and the script exits with a distinct exit code (code 3), which can be used to integrate with other tools, like for example Jenkins:
master_1 | [2021-10-07 21:35:01,009] dc53ec78188f/INFO/root: CHECK FAILED: 90% percentile was 300.0 ms (threshold 200.0 ms)
master_1 | [2021-10-07 21:35:01,009] dc53ec78188f/INFO/root: CHECK SUCCESSFUL: 95% percentile was 360.0 ms (threshold 500.0) ms
master_1 | [2021-10-07 21:35:01,009] dc53ec78188f/INFO/root: CHECK SUCCESSFUL: 99% percentile was 910.0 ms (threshold 1000.0) ms
master_1 | [2021-10-07 21:35:01,010] dc53ec78188f/INFO/root: CHECK FAILED: total rps was 3.6 (threshold 10.0)
master_1 | [2021-10-07 21:35:01,010] dc53ec78188f/INFO/locust.main: Shutting down (exit code 3), bye.
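A CI step can react to that exit code directly; a minimal shell sketch, assuming the master service is named `master` as in the logs above:

```sh
# Run the test and fail the pipeline when the KPI checks fail (exit code 3)
docker-compose up --abort-on-container-exit --exit-code-from master
status=$?
if [ "$status" -eq 3 ]; then
  echo "KPI goals not met"
  exit 1
fi
```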
If necessary, it is also possible to add more KPIs to be validated. We just need to read them from the metrics that Locust provides: Locust_stats
The KPI validation is possible thanks to a Locust plugin. You can get more info about that at this link: Locust_plugins
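As a sketch of what such an extension could look like, a listener on Locust's `quitting` event can read any value from the stats object and set the process exit code; the median response time check below is an invented example, not one of the five KPIs above:

```python
# Sketch: extending the KPI validation with another metric from Locust's stats
import logging
from locust import events

@events.quitting.add_listener
def check_median_response_time(environment, **kwargs):
    # environment.stats.total aggregates all requests; times are in ms
    if environment.stats.total.median_response_time > 300:
        logging.info("CHECK FAILED: median response time was above 300 ms")
        environment.process_exit_code = 3  # same exit code used by the KPI checks
```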
The test report is automatically generated after every test run. When running from GitHub Actions, it is available in the archived artifacts.