Bench web/scripts #5
base: BenchWeb/frameworks
Conversation
Reviewer's Guide by Sourcery

This pull request adds a comprehensive set of scripts and infrastructure for running web framework benchmarks, including test types, database support, load testing, and development environment setup.

Class diagram for the new Benchmarker and related classes

classDiagram
class Benchmarker {
+Benchmarker(config)
+run() bool
+stop(signal, frame)
-__exit_test(success, prefix, file, message) bool
-__run_test(test, benchmark_log) bool
-__benchmark(framework_test, benchmark_log)
-__begin_logging(framework_test, test_type)
-__end_logging()
-__log_container_output(container, framework_test, test_type)
}
class DockerHelper {
+DockerHelper(benchmarker)
+build(test, build_log_dir) int
+run(test, run_log_dir) Container
+stop(containers)
+build_databases()
+start_database(database) Container
+build_wrk()
+test_client_connection(url) bool
+server_container_exists(container_id_or_name) bool
+benchmark(script, variables) Container
}
class Results {
+Results(benchmarker)
+parse(tests)
+parse_test(framework_test, test_type) dict
+parse_all(framework_test)
+write_intermediate(test_name, status_message)
+set_completion_time()
+upload()
+load()
+get_docker_stats_file(test_name, test_type) string
+get_raw_file(test_name, test_type) string
+get_stats_file(test_name, test_type) string
+report_verify_results(framework_test, test_type, result)
+report_benchmark_results(framework_test, test_type, results)
+finish()
}
Benchmarker --> DockerHelper
Benchmarker --> Results
DockerHelper --> Container
Results --> Benchmarker
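To make the collaboration concrete, here is a minimal Python sketch of the wiring the diagram implies. This is a hypothetical illustration, not code from the PR: class and method names follow the diagram, but every body is a stand-in.

```python
class DockerHelper:
    """Builds and runs test containers (stubbed for illustration)."""

    def __init__(self, benchmarker):
        self.benchmarker = benchmarker

    def build(self, test, build_log_dir):
        # The real helper would shell out to `docker build`; 0 means success.
        return 0


class Results:
    """Accumulates per-test results (stubbed for illustration)."""

    def __init__(self, benchmarker):
        self.benchmarker = benchmarker
        self.completed = {}

    def report_benchmark_results(self, framework_test, test_type, results):
        self.completed[(framework_test, test_type)] = results


class Benchmarker:
    """Drives a run, delegating container work and result collection."""

    def __init__(self, config):
        self.config = config
        self.docker_helper = DockerHelper(self)
        self.results = Results(self)

    def run(self):
        for test in self.config.get("tests", []):
            if self.docker_helper.build(test, "/tmp/build-logs") != 0:
                return False  # a failed build aborts the run
            self.results.report_benchmark_results(test, "json", {"ok": True})
        return True
```

In this sketch `Benchmarker.run()` returns a bool as in the diagram, and `Results` keeps a back-reference to the `Benchmarker`, matching the bidirectional arrow.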
Hey @gitworkflows - I've reviewed your changes and they look great!
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟡 Complexity: 1 issue found
- 🟡 Documentation: 3 issues found
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
6. Fix this `README.md` and open a pull request

Starting on line 49 is your actual `README.md` that will sit with your test implementation. Update all the dummy values to their correct values so that when people visit your test in our Github repository, they will be greated with information on how your test implementation works and where to look for useful source code.

After you have the real `README.md` file in place, delete everything above line 59 and you are ready to open a pull request.
nitpick (documentation): Typo: 'greated' should be 'greeted' in the text above
too old, download the newest `deb` directly). See
[here](https://www.vagrantup.com/downloads.html) for downloads

* **A CPU that can virtualize a 64-bit virtual OS**, because BW
suggestion (documentation): Clarify that the 64-bit requirement applies to the host machine
Consider rephrasing to explicitly state this is a host machine requirement
* **A CPU that can virtualize a 64-bit virtual OS**, because BW
* **A host machine with a 64-bit CPU capable of virtualization**, because BW
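On a Linux host, the virtualization requirement can be checked by looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) flags in `/proc/cpuinfo`. A small sketch, assuming the standard `flags :` line format:

```python
def cpu_supports_virtualization(cpuinfo_text):
    """Return True if any 'flags' line advertises hardware virtualization."""
    virt_flags = {"vmx", "svm"}  # Intel VT-x / AMD-V
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and virt_flags & set(line.split()):
            return True
    return False
```

On Linux, call it with the contents of `/proc/cpuinfo`; a False result means the CPU (or a BIOS setting) does not expose hardware virtualization.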
## Prerequisites
* **A recent version of Vagrant**, like 1.6.3 (NOTE: `apt-get` is |
suggestion (documentation): Consider making version requirement more specific
It would be helpful to specify the minimum required version of Vagrant
* **A recent version of Vagrant**, like 1.6.3 (NOTE: `apt-get` is
* **Vagrant 1.6.3 or higher** (NOTE: `apt-get` is
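If the README adopts the "1.6.3 or higher" wording, a setup script could enforce it with a simple numeric comparison of dotted versions. This is a sketch; pre-release suffixes like `-beta` are not handled:

```python
def meets_min_version(version, minimum="1.6.3"):
    """Compare dotted version strings numerically, e.g. '1.10.0' >= '1.6.3'."""
    def parse(v):
        return tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)
```

Tuple comparison handles multi-digit components correctly, which plain string comparison would get wrong ("1.10.0" sorts before "1.6.3" lexicographically).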
with open(self.file, "w") as f:
    f.write(json.dumps(self.__to_jsonable(), indent=2))

def parse_test(self, framework_test, test_type):
issue (complexity): Consider refactoring the parse_test() method into smaller focused helper methods to handle different parsing responsibilities.
The `parse_test()` method has become difficult to maintain due to deep nesting and mixed responsibilities. Consider extracting the parsing logic into focused helper methods:
def parse_test(self, framework_test, test_type):
results = {'results': []}
stats = []
if not os.path.exists(self.get_raw_file(framework_test.name, test_type)):
return results
with open(self.get_raw_file(framework_test.name, test_type)) as raw_data:
is_warmup = True
current_result = None
for line in raw_data:
if self._is_new_test_block(line):
is_warmup = False
current_result = None
continue
if "Warmup" in line or "Primer" in line:
is_warmup = True
continue
if not is_warmup:
current_result = self._ensure_result_dict(current_result, results)
self._parse_metrics_line(line, current_result)
self._write_stats_file(framework_test, test_type, stats)
return results
def _parse_metrics_line(self, line, result):
if "Latency" in line:
self._parse_latency(line, result)
elif "requests in" in line:
self._parse_request_count(line, result)
elif "Socket errors" in line:
self._parse_socket_errors(line, result)
# etc for other metrics
This refactoring:
- Reduces nesting depth by extracting parsing logic
- Makes the code more maintainable by grouping related parsing logic
- Improves readability by giving meaningful names to parsing operations
- Preserves all existing functionality
The helper methods make the code's intent clearer while keeping the implementation details organized and accessible.
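For example, the `_parse_latency` helper named above might look like the following sketch. The wrk line format and the microsecond normalization are assumptions for illustration, not code from the PR:

```python
import re

# Unit multipliers to normalize wrk latency values to microseconds (assumed).
LATENCY_UNITS = {"us": 1, "ms": 1_000, "s": 1_000_000}

def parse_latency(line, result):
    """Parse a wrk 'Latency' summary line such as:

        '    Latency   241.00us  100.11us   1.89ms   64.69%'

    storing avg/stdev/max latency (in microseconds) into result.
    """
    fields = re.findall(r"(\d+(?:\.\d+)?)(us|ms|s)", line)
    keys = ("latencyAvg", "latencyStdev", "latencyMax")
    for key, (value, unit) in zip(keys, fields):
        result[key] = float(value) * LATENCY_UNITS[unit]
    return result
```

Note the trailing percentage column carries no unit suffix, so the regex skips it, and `zip` simply stops early if a line does not match the expected shape.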
@@ -0,0 +1,25 @@
db = db.getSiblingDB('hello_world')
db.world.drop()
for (var i = 1; i <= 10000; i++) {
issue (code-quality): Use `const` or `let` instead of `var`. (`avoid-using-var`)

Explanation: `const` is preferred as it ensures you cannot reassign references (which can lead to buggy and confusing code). `let` may be used if you need to reassign references - it's preferred to `var` because it is block- rather than function-scoped. From the Airbnb JavaScript Style Guide.
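For comparison, the documents that the quoted seeding loop inserts into `db.world` can be sketched in Python. The `randomNumber` range of 1..10000 is an assumption based on the usual hello_world schema, not taken from the quoted lines:

```python
import random

def build_world_docs(count=10000, seed=0):
    """Build world documents equivalent to the create.js seeding loop (sketch)."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    return [
        {"_id": i, "id": i, "randomNumber": rng.randint(1, 10000)}
        for i in range(1, count + 1)
    ]
```

A real loader would pass this list to an insert-many call against the `world` collection; the sketch only builds the documents so it can run without a database.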
self.failed = dict()
self.verify = dict()
for type in test_types:
    self.rawData[type] = dict()
suggestion (code-quality): Replace `dict()` with `{}` (`dict-literal`)

self.rawData[type] = dict()
self.rawData[type] = {}
Explanation: The most concise and Pythonic way to create a dictionary is to use the `{}` notation. This fits in with the way we create dictionaries with items, saving a bit of mental energy that might be taken up with thinking about two different ways of creating dicts.

x = {"first": "thing"}

Doing things this way has the added advantage of being a nice little performance improvement. Here are the timings before and after the change:

$ python3 -m timeit "x = dict()"
5000000 loops, best of 5: 69.8 nsec per loop
$ python3 -m timeit "x = {}"
20000000 loops, best of 5: 29.4 nsec per loop

Similar reasoning and performance results hold for replacing `list()` with `[]`.
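The shell timings above can also be reproduced from within Python; absolute numbers depend on the machine, but the literal form consistently wins because it avoids the global name lookup and call that `dict()` incurs:

```python
import timeit

# Time repeated constructions of an empty dict both ways.
dict_call = timeit.timeit("x = dict()", number=1_000_000)
dict_literal = timeit.timeit("x = {}", number=1_000_000)
print(f"dict(): {dict_call:.4f}s  {{}}: {dict_literal:.4f}s")
```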
'''
Parses the given test and test_type from the raw_file.
'''
results = dict()
suggestion (code-quality): Replace `dict()` with `{}` (`dict-literal`)

results = dict()
results = {}
continue
if not is_warmup:
    if rawData is None:
        rawData = dict()
suggestion (code-quality): Replace `dict()` with `{}` (`dict-literal`)

rawData = dict()
rawData = {}
the parent process' memory from the child process
'''
if framework_test.name not in self.verify.keys():
    self.verify[framework_test.name] = dict()
suggestion (code-quality): Replace `dict()` with `{}` (`dict-literal`)

self.verify[framework_test.name] = dict()
self.verify[framework_test.name] = {}
the parent process' memory from the child process
'''
if test_type not in self.rawData.keys():
    self.rawData[test_type] = dict()
suggestion (code-quality): Replace `dict()` with `{}` (`dict-literal`)

self.rawData[test_type] = dict()
self.rawData[test_type] = {}
PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

PR Code Suggestions ✨

Explore these optional code suggestions:
PR Type

enhancement, configuration changes, documentation

Description

Implements the core `Benchmarker`, `Results`, `Metadata`, and `DockerHelper` classes, along with various `TestType` classes.

Changes walkthrough 📝
22 files

- `benchmarks/load-testing/wrk/wrk.dockerfile`: Add Dockerfile for the `wrk` benchmarking tool setup.
- `infrastructure/docker/databases/postgres/postgres.dockerfile`: Add Dockerfile for PostgreSQL database setup.
- `infrastructure/docker/databases/mysql/mysql.dockerfile`: Add Dockerfile for MySQL database setup.
- `infrastructure/docker/databases/mongodb/mongodb.dockerfile`: Add Dockerfile for MongoDB database setup.
- `infrastructure/docker/databases/mongodb/create.js`: Add MongoDB initialization script with sample data; creates the `hello_world` database with `world` and `fortune` collections.
- `infrastructure/docker/services/bw-startup.sh`: Add startup script for the BW service with Docker setup.
- `infrastructure/vagrant/bootstrap.sh`: Add bootstrap script for Vagrant environment setup.
- `infrastructure/docker/services/bw-shutdown.sh`: Add shutdown script for the BW service with Docker cleanup.
- `infrastructure/docker/entry.sh`: Add entry script for Docker container execution.
- `infrastructure/docker/databases/postgres/config.sh`: Add PostgreSQL configuration script with custom settings.
- `infrastructure/vagrant/core.rb`: Add Vagrant configuration for provisioning and provider setup.
- `infrastructure/docker/databases/.siegerc`: Add Siege configuration file for load testing.
- `infrastructure/docker/databases/postgres/create-postgres.sql`: Add PostgreSQL initialization script with sample data.
- `infrastructure/docker/databases/mysql/create.sql`: Add MySQL initialization script with sample data.
- `infrastructure/docker/Dockerfile`: Add Dockerfile for the BW service image with package installation.
- `infrastructure/docker/databases/mysql/my.cnf`: Add MySQL configuration file with custom settings.
- `infrastructure/docker/databases/postgres/postgresql.conf`: Add PostgreSQL configuration file with custom settings.
- `infrastructure/docker/services/bw.service`: Add systemd service file for the BW service.
- `infrastructure/docker/databases/postgres/60-postgresql-shm.conf`: Add shared memory configuration for PostgreSQL.
- `infrastructure/docker/databases/mysql/60-database-shm.conf`: Add shared memory configuration for MySQL.
- `infrastructure/vagrant/Vagrantfile`: Add Vagrantfile for VM provisioning and network setup.
- `benchmarks/pre-benchmarks/benchmark_config.json`: Add benchmark configuration JSON with placeholders.
33 files

- `utils/results.py`: Implement the `Results` class for benchmark data handling.
- `benchmarks/verifications.py`: Add verification functions for benchmark results.
- `utils/metadata.py`: Implement the `Metadata` class for benchmark test management.
- `utils/docker_helper.py`: Implement the `DockerHelper` class for managing Docker operations.
- `utils/scaffolding.py`: Add the `Scaffolding` class for initializing new benchmark tests.
- `benchmarks/benchmarker.py`: Implement the `Benchmarker` class for running and managing benchmarks.
- `benchmarks/fortune/fortune_html_parser.py`: Add the `FortuneHTMLParser` class for HTML response validation.
- `scripts/run-tests.py`: Add script for running benchmark tests with a CLI.
- `utils/time_logger.py`: Implement the `TimeLogger` class for execution time tracking.
- `utils/popen.py`: Add the `PopenTimeout` class for subprocess management with timeout.
- `benchmarks/framework_test.py`: Add the `FrameworkTest` class for managing framework tests and verifying URLs; uses `colorama` for colored output.
- `benchmarks/abstract_test_type.py`: Introduce the `AbstractTestType` class as an interface for test types.
- `benchmarks/fortune/fortune.py`: Implement the fortune `TestType` with HTML parsing and verification via `FortuneHTMLParser`.
- `utils/benchmark_config.py`: Add the `BenchmarkConfig` class for managing benchmark configurations.
- `infrastructure/docker/databases/abstract_database.py`: Introduce the `AbstractDatabase` class as a base for database operations.
- `scripts/fail-detector.py`: Create fail detector script for analyzing framework failures.
- `benchmarks/db/db.py`: Implement the database `TestType` with response validation.
- `utils/output_helper.py`: Add logging utilities with color support via `colorama`, quiet mode, and a `QuietOutputStream` for conditional output suppression.
- `infrastructure/docker/databases/postgres/postgres.py`: Implement the `Database` class for PostgreSQL operations with connection handling and reset.
- `infrastructure/docker/databases/mongodb/mongodb.py`: Implement the `Database` class for MongoDB operations with connection handling.
- `infrastructure/docker/databases/mysql/mysql.py`: Implement the `Database` class for MySQL operations with connection handling.
- `benchmarks/plaintext/plaintext.py`: Implement the plaintext `TestType` with response validation.
- `benchmarks/cached-query/cached-query.py`: Implement the cached-query `TestType` with response validation.
- `benchmarks/query/query.py`: Implement the query `TestType` with response validation.
- `benchmarks/update/update.py`: Implement the update `TestType` with response validation.
- `benchmarks/json/json.py`: Implement the JSON `TestType` with response validation.
- `infrastructure/docker/databases/__init__.py`: Add dynamic loading and registration of database modules.
- `utils/audit.py`: Add the `Audit` class for framework test auditing.
- `benchmarks/__init__.py`: Add dynamic loading and registration of test type modules.
- `benchmarks/load-testing/wrk/pipeline.sh`: Add pipeline benchmark script using `wrk`.
- `benchmarks/load-testing/wrk/concurrency.sh`: Add concurrency benchmark script using `wrk`.
- `benchmarks/load-testing/wrk/query.sh`: Add query benchmark script using `wrk`.
- `benchmarks/load-testing/wrk/pipeline.lua`: Add Lua script for `wrk` to handle pipeline requests.

1 file

- `infrastructure/vagrant/custom_motd.sh`: Add custom message of the day script for Vagrant.
2 files

- `infrastructure/vagrant/README.md`: Add README for Vagrant development environment setup.
- `benchmarks/pre-benchmarks/README.md`: Add README template for new test implementations.