Standardize sanity failures when using exit_pytest_execution #3008
base: main
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##             main    #3008      +/-   ##
==========================================
- Coverage   98.28%   98.28%    -0.01%
==========================================
  Files          25       25
  Lines        2157     2156        -1
==========================================
- Hits         2120     2119        -1
  Misses         37       37
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
/build-and-push-container
New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-3008 published
…zation-tests into sanity-fail-file
/verified
Test execution plan request: as an expert software testing engineer, analyze all modified files in this PR and create a targeted test execution plan, posted as a review comment on the Files Changed tab.
Test Execution Plan
Based on the analysis of this PR's changes to exit_pytest_execution and sanity failure handling:
Unit Tests (Required)
These directly test the modified functionality:
- `utilities/unittests/test_pytest_utils.py`: complete file verification needed, as it tests all `exit_pytest_execution` scenarios, including the new `log_message` parameter, filename handling, and directory resolution changes
- `utilities/unittests/test_sanity.py`: complete file verification needed, as it validates sanity check behavior with the updated `exit_pytest_execution` interface
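A hedged sketch of the kind of check such a unit test might make; the helper name and file location below are hypothetical stand-ins for illustration, not the repository's actual API:

```python
import tempfile
from pathlib import Path

# Hypothetical stand-in for the utility under test; the real function and
# its predefined error-file location live in the repository's utilities.
ERROR_FILE = Path(tempfile.gettempdir()) / "sanity_failure.txt"

def write_sanity_error(short_message: str) -> Path:
    # The point under test: the file always lands in a known, predefined
    # location, never in the originating test's directory.
    ERROR_FILE.write_text(short_message)
    return ERROR_FILE

def test_error_file_written_to_predefined_location():
    path = write_sanity_error("sanity check failed")
    assert path == ERROR_FILE
    assert path.read_text() == "sanity check failed"

test_error_file_written_to_predefined_location()
```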
Integration Tests (Recommended)
Tests that exercise sanity failure paths through the modified fixtures and exit conditions:
- Tests using fixtures from `tests/conftest.py`:
  - Any tests depending on the `cnv_tests_utilities_namespace` fixture
  - Any tests depending on the `updated_default_storage_class_ocs_virt` fixture
  - Any tests using the `upgrade_bridge_marker_nad` or `upgrade_br1test_nad` fixtures
- Tests using fixtures from `tests/virt/conftest.py`:
  - Any tests depending on the `virt_special_infra_sanity` fixture
- Tests using fixtures from `tests/network/conftest.py`:
  - Tests that verify nmstate running pods (network sanity checks)
- Tests using fixtures from `tests/install_upgrade_operators/product_upgrade/conftest.py`:
  - Any EUS upgrade tests that could trigger the EUS version sanity check
Smoke Test Coverage
The PR author (rnetser) has already verified with smoke tests across all storage class environments, as noted in the comment "smoke with all SCes + errors".
Note: The primary risk area is the error file location change and parameter renaming in exit_pytest_execution. The unit tests provide the most direct validation of these changes.
@openshift-virtualization-qe-bot-3: I've posted the test execution plan as a review comment on the Files Changed tab as requested. The plan covers the required unit tests and the recommended integration tests, and is now available as a review comment for the team to use.
Short description:
When `exit_pytest_execution` is called, the error message is logged and also written to a file. Currently, the file is written to the originating test's directory, which is unpredictable and led to the failure.
To use the file for its original purpose (exposing the failure in the Jenkins build info), the error file must be saved in a known, predefined location.
Because the build info cannot be too long, the error written to the file is shorter than the one in the logs.
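A minimal sketch of the behavior described above; the function name matches the PR, but the error-file path, parameter names, and exit mechanics are assumptions for illustration (the real utility lives in the repository's utilities module and aborts via `pytest.exit`):

```python
import tempfile
from pathlib import Path

# Assumed predefined location; the real path is defined by the project.
SANITY_ERROR_FILE = Path(tempfile.gettempdir()) / "pytest_sanity_error.txt"

def exit_pytest_execution(message, return_code=99, short_message=None):
    """Log the full failure message, write a short version to a known,
    predefined file (so the Jenkins build info can surface it), and abort
    the run. SystemExit stands in here for pytest.exit()."""
    print(message)  # full details go to the log
    # Build info cannot be too long, so the file gets the short form.
    SANITY_ERROR_FILE.write_text(short_message or message)
    raise SystemExit(return_code)
```

Because the file path is fixed rather than derived from the originating test's directory, a CI step can always read the same location after a failed run.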
More details:
What this PR does / why we need it:
Which issue(s) this PR fixes:
Special notes for reviewer:
jira-ticket: