
Conversation

@rnetser
Collaborator

@rnetser rnetser commented Dec 4, 2025

Short description:

When exit_pytest_execution is called, the error message is logged and is also written to a file.
Currently, the file is written to the directory of whichever test triggered the exit, which makes its location unpredictable and is what led to the failure.
To use the file for its original purpose (exposing the failure in the Jenkins build info), the error file must be saved in a known, predefined location.

Because the Jenkins build info is length-limited, the error written to the file is shorter than the one in the logs.

More details:
What this PR does / why we need it:
Which issue(s) this PR fixes:
Special notes for reviewer:
jira-ticket:

Summary by CodeRabbit

  • Chores
    • Improved failure reporting: consolidated logging into a dedicated log field, added explicit filenames and return codes for test failures, and moved exit logs to a new base diagnostic directory for easier artifact discovery.
  • Tests
    • Updated tests to assert the new logging field and the relocated diagnostic directory paths.


@coderabbitai

coderabbitai bot commented Dec 4, 2025

Walkthrough

Replaced the message payload with log_message in exit reporting, added an optional message parameter, switched data-collector accessor to get_data_collector_base_directory, updated call sites/tests to the new signature, and added an informational log before file writes.

Changes

Cohort / File(s) Summary
Core utils
utilities/pytest_utils.py
Changed exit_pytest_execution signature to accept log_message (replacing message), added optional message param, switched from get_data_collector_dir to get_data_collector_base_directory, and updated internal logging and pytest exit reason handling.
Data collector
utilities/data_collector.py
Added an informational log in write_to_file that records the target file path before writing; updated import usage for the base-directory accessor.
Sanity helper
utilities/sanity.py
Updated call sites to pass log_message and an explicit message when invoking exit_pytest_execution.
Unit tests
utilities/unittests/test_pytest_utils.py, utilities/unittests/test_sanity.py
Replaced get_data_collector_dir with get_data_collector_base_directory in patches, renamed test variables to log_message, and updated assertions and expected file paths to the new base-directory layout.
Top-level tests — conftest call sites
tests/conftest.py, tests/install_upgrade_operators/product_upgrade/conftest.py, tests/network/conftest.py, tests/virt/conftest.py
Updated exit_pytest_execution call sites to use log_message instead of the previous message kwarg, and added explicit message, filename, and/or return_code arguments where present.
Misc test adjustments
tests/...
Minor test call-site updates to match the new exit_pytest_execution signature and related assertions.
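Taken together, the call-site changes in the table can be sketched like this. The sanity-check logic, messages, filename, and return code are hypothetical, and exit_pytest_execution is stubbed to keep the sketch self-contained:

```python
def exit_pytest_execution(log_message, message=None, filename=None, return_code=99):
    # Stub of the real helper in utilities/pytest_utils.py: it logs
    # log_message, writes the short message to filename under the data
    # collector base directory, then calls pytest.exit.
    raise SystemExit(return_code)


def nodes_sanity_check(nodes):
    # Hypothetical sanity check showing the new keyword usage: the long
    # text goes to the logs (log_message), the short one to the error
    # file that is exposed in the Jenkins build info (message).
    broken = [name for name, ready in nodes.items() if not ready]
    if broken:
        exit_pytest_execution(
            log_message=f"Cluster sanity failed, nodes not ready: {', '.join(broken)}",
            message="Cluster sanity failed: nodes not ready",
            filename="cluster_sanity_failure.txt",
            return_code=98,
        )
```

Passing both keywords lets the logs keep the full diagnostic text while the file stays short enough for the build info.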

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Check consistent usage of log_message vs message across call sites.
  • Verify tests patch get_data_collector_base_directory and updated file paths.
  • Confirm exit_pytest_execution behavior (written file content and pytest.exit reason) matches expectations.
  • Review the new informational log in write_to_file for sensitive-path exposure.
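The second checklist item could be verified with a test along these lines; the module layout and patch target are illustrative, with small stand-in functions in place of the real utilities:

```python
import os
import sys
from unittest import mock


def get_data_collector_base_directory():
    # Stand-in for the real accessor in utilities/data_collector.py.
    return "/real/artifacts/dir"


def error_file_path(filename="pytest_exit_errors.txt"):
    # Code under test: resolves the error file under the base directory.
    return os.path.join(get_data_collector_base_directory(), filename)


def test_error_file_uses_base_directory():
    # Patch the accessor, mirroring how the PR's unit tests swap
    # get_data_collector_dir for get_data_collector_base_directory.
    target = sys.modules[__name__]
    with mock.patch.object(target, "get_data_collector_base_directory", return_value="/tmp/artifacts"):
        assert error_file_path() == "/tmp/artifacts/pytest_exit_errors.txt"
```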

Possibly related PRs

Suggested labels

verified, size/M, can-be-merged, approved-rnetser, lgtm-openshift-virtualization-qe-bot, commented-coderabbitai[bot]

Suggested reviewers

  • RoniKishner
  • geetikakay
  • dshchedr
  • vsibirsk
  • hmeir

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 77.27% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately describes the main change: standardizing how sanity failures are handled when calling exit_pytest_execution, which is the core focus of the PR.
Description check ✅ Passed The description provides a clear short summary and explanation of the problem and solution, but the template sections for issue references, reviewer notes, and jira-ticket are not completed.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9deea1e and fdd1045.

📒 Files selected for processing (1)
  • tests/conftest.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/conftest.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: can-be-merged
  • GitHub Check: can-be-merged
  • GitHub Check: build-container
  • GitHub Check: tox
  • GitHub Check: can-be-merged
  • GitHub Check: can-be-merged
  • GitHub Check: can-be-merged


@openshift-virtualization-qe-bot

Report bugs in Issues

Welcome! 🎉

This pull request will be automatically processed with the following features:

🔄 Automatic Actions

  • Reviewer Assignment: Reviewers are automatically assigned based on the OWNERS file in the repository root
  • Size Labeling: PR size labels (XS, S, M, L, XL, XXL) are automatically applied based on changes
  • Issue Creation: A tracking issue is created for this PR and will be closed when the PR is merged or closed
  • Pre-commit Checks: pre-commit runs automatically if .pre-commit-config.yaml exists
  • Branch Labeling: Branch-specific labels are applied to track the target branch
  • Auto-verification: Auto-verified users have their PRs automatically marked as verified

📋 Available Commands

PR Status Management

  • /wip - Mark PR as work in progress (adds WIP: prefix to title)
  • /wip cancel - Remove work in progress status
  • /hold - Block PR merging (approvers only)
  • /hold cancel - Unblock PR merging
  • /verified - Mark PR as verified
  • /verified cancel - Remove verification status
  • /reprocess - Trigger complete PR workflow reprocessing (useful if webhook failed or configuration changed)

Review & Approval

  • /lgtm - Approve changes (looks good to me)
  • /approve - Approve PR (approvers only)
  • /automerge - Enable automatic merging when all requirements are met (maintainers and approvers only)
  • /assign-reviewers - Assign reviewers based on OWNERS file
  • /assign-reviewer @username - Assign specific reviewer
  • /check-can-merge - Check if PR meets merge requirements

Testing & Validation

  • /retest tox - Run Python test suite with tox
  • /retest build-container - Rebuild and test container image
  • /retest all - Run all available tests

Container Operations

  • /build-and-push-container - Build and push container image (tagged with PR number)
    • Supports additional build arguments: /build-and-push-container --build-arg KEY=value

Cherry-pick Operations

  • /cherry-pick <branch> - Schedule cherry-pick to target branch when PR is merged
    • Multiple branches: /cherry-pick branch1 branch2 branch3

Label Management

  • /<label-name> - Add a label to the PR
  • /<label-name> cancel - Remove a label from the PR

✅ Merge Requirements

This PR will be automatically approved when the following conditions are met:

  1. Approval: /approve from at least one approver
  2. LGTM Count: Minimum 2 /lgtm from reviewers
  3. Status Checks: All required status checks must pass
  4. No Blockers: No WIP, hold, or conflict labels
  5. Verified: PR must be marked as verified (if verification is enabled)

📊 Review Process

Approvers and Reviewers

Approvers:

  • EdDev
  • dshchedr
  • myakove
  • rnetser
  • vsibirsk

Reviewers:

  • Anatw
  • EdDev
  • OhadRevah
  • RoniKishner
  • SamAlber
  • SiboWang1997
  • akri3i
  • albarker-rh
  • azhivovk
  • dshchedr
  • geetikakay
  • hmeir
  • jerry7z
  • kbidarkar
  • rlobillo
  • rnetser
  • servolkov
  • vsibirsk
  • yossisegev
Available Labels
  • hold
  • verified
  • wip
  • lgtm
  • approve
  • automerge

💡 Tips

  • WIP Status: Use /wip when your PR is not ready for review
  • Verification: The verified label is automatically removed on each new commit
  • Cherry-picking: Cherry-pick labels are processed when the PR is merged
  • Container Builds: Container images are automatically tagged with the PR number
  • Permission Levels: Some commands require approver permissions
  • Auto-verified Users: Certain users have automatic verification and merge privileges

For more information, please refer to the project documentation or contact the maintainers.

@codecov

codecov bot commented Dec 4, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 98.28%. Comparing base (f0cfa42) to head (fdd1045).
⚠️ Report is 22 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3008      +/-   ##
==========================================
- Coverage   98.28%   98.28%   -0.01%     
==========================================
  Files          25       25              
  Lines        2157     2156       -1     
==========================================
- Hits         2120     2119       -1     
  Misses         37       37              
Flag Coverage Δ
utilities 98.28% <100.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

@rnetser
Collaborator Author

rnetser commented Dec 8, 2025

/build-and-push-container

@openshift-virtualization-qe-bot

New container for quay.io/openshift-cnv/openshift-virtualization-tests:pr-3008 published

@rnetser
Collaborator Author

rnetser commented Dec 8, 2025

/verified

@openshift-virtualization-qe-bot-3
Contributor

@coderabbitai

Test execution plan request details

CRITICAL: You MUST respond with a review comment on the Files Changed tab, NOT as a regular PR comment.
If it cannot be on the 1st line of the 1st file, add it to any other changed file.

As an expert software testing engineer, analyze all modified files in this PR and create a targeted test execution plan.
You will create a change request comment on the 1st line of the 1st file in the pr with the test execution plan.
If you fail to run or post a comment, retry.

Analysis Requirements:

  1. Examine code changes in each modified file
  2. Identify affected code paths, functions, and classes
  3. Analyze pytest-specific elements: fixtures (scope, dependencies), parametrization, markers, conftest changes
  4. Trace test dependencies through imports, shared utilities, and fixture inheritance
  5. Detect new tests introduced in the PR

Your deliverable:
Your change request comment will be based on the following requirements:

Test Execution Plan

  • path/to/test_file.py - When the entire test file needs verification
  • path/to/test_file.py::TestClass::test_method - When specific test(s) needed
  • path/to/test_file.py::test_function - When specific test(s) needed
  • -m marker - When specific marker(s) can be used to cover multiple cases.

Guidelines:

  • Include only tests directly affected by the changes
  • Use a full file path only if ALL tests in that file require verification
  • Use file path + test name if only specific tests are needed
  • If a test marker can cover multiple files/tests, provide the marker
  • Balance coverage vs over-testing - Keep descriptions minimal
  • Do not add a follow-up comment in the PR, only the change request one


@coderabbitai coderabbitai bot left a comment


Test Execution Plan

Based on the analysis of this PR's changes to exit_pytest_execution and sanity failure handling:

Unit Tests (Required)

These directly test the modified functionality:

  • utilities/unittests/test_pytest_utils.py - Complete file verification needed as it tests all exit_pytest_execution scenarios including the new log_message parameter, filename handling, and directory resolution changes

  • utilities/unittests/test_sanity.py - Complete file verification needed as it validates sanity check behavior with the updated exit_pytest_execution interface

Integration Tests (Recommended)

Tests that exercise sanity failure paths through the modified fixtures and exit conditions:

  • Tests using fixtures from tests/conftest.py:

    • Any tests depending on cnv_tests_utilities_namespace fixture
    • Any tests depending on updated_default_storage_class_ocs_virt fixture
    • Any tests using upgrade_bridge_marker_nad or upgrade_br1test_nad fixtures
  • Tests using fixtures from tests/virt/conftest.py:

    • Any tests depending on virt_special_infra_sanity fixture
  • Tests using fixtures from tests/network/conftest.py:

    • Tests that verify nmstate running pods (network sanity checks)
  • Tests using fixtures from tests/install_upgrade_operators/product_upgrade/conftest.py:

    • Any EUS upgrade tests that could trigger the EUS version sanity check

Smoke Test Coverage

The PR author (rnetser) has already verified with smoke tests across all storage class environments, as noted in the comment "smoke with all SCes + errors".

Note: The primary risk area is the error file location change and parameter renaming in exit_pytest_execution. The unit tests provide the most direct validation of these changes.

@coderabbitai

coderabbitai bot commented Dec 8, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get the list of changed files in the PR to determine which file to comment on
gh pr view 3008 --repo RedHatQE/openshift-virtualization-tests --json files --jq '.files[0].path'

Length of output: 17


🏁 Script executed:

#!/bin/bash
# Create a test execution plan review comment on the first changed file
# First, let's create the review comment body
cat << 'EOF' > /tmp/test_execution_plan.md
## Test Execution Plan

Based on the analysis of this PR's changes to `exit_pytest_execution` and sanity failure handling:

### Unit Tests (Required)
These directly test the modified functionality:

- `utilities/unittests/test_pytest_utils.py` - Complete file verification needed as it tests all exit_pytest_execution scenarios including the new log_message parameter, filename handling, and directory resolution changes

- `utilities/unittests/test_sanity.py` - Complete file verification needed as it validates sanity check behavior with the updated exit_pytest_execution interface

### Integration Tests (Recommended)
Tests that exercise sanity failure paths through the modified fixtures and exit conditions:

- Tests using fixtures from `tests/conftest.py`:
  - Any tests depending on `cnv_tests_utilities_namespace` fixture
  - Any tests depending on `updated_default_storage_class_ocs_virt` fixture
  - Any tests using `upgrade_bridge_marker_nad` or `upgrade_br1test_nad` fixtures

- Tests using fixtures from `tests/virt/conftest.py`:
  - Any tests depending on `virt_special_infra_sanity` fixture

- Tests using fixtures from `tests/network/conftest.py`:
  - Tests that verify nmstate running pods (network sanity checks)

- Tests using fixtures from `tests/install_upgrade_operators/product_upgrade/conftest.py`:
  - Any EUS upgrade tests that could trigger the EUS version sanity check

### Smoke Test Coverage
The PR author (rnetser) has already verified with smoke tests across all storage class environments, as noted in the comment "smoke with all SCes + errors".

**Note:** The primary risk area is the error file location change and parameter renaming in `exit_pytest_execution`. The unit tests provide the most direct validation of these changes.
EOF

# Now post the review comment on the first file (line 1) using gh CLI
gh pr review 3008 \
  --repo RedHatQE/openshift-virtualization-tests \
  --comment \
  --body "$(cat /tmp/test_execution_plan.md)"

Length of output: 0


@openshift-virtualization-qe-bot-3: I've posted the test execution plan as a review comment on the Files Changed tab as requested. The plan includes:

Unit Tests (Required):

  • utilities/unittests/test_pytest_utils.py
  • utilities/unittests/test_sanity.py

Integration Tests (Recommended):

  • Tests using fixtures from the modified conftest.py files across tests/, tests/virt/, tests/network/, and tests/install_upgrade_operators/product_upgrade/

The test execution plan is now available as a review comment for the team to use.


🧠 Learnings used
Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 2469
File: utilities/sanity.py:139-142
Timestamp: 2025-11-08T07:36:57.616Z
Learning: In the openshift-virtualization-tests repository, user rnetser prefers to keep refactoring PRs (like PR #2469) strictly focused on moving/organizing code into more granular modules without adding new functionality, error handling, or behavioral changes. Such improvements should be handled in separate PRs.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 0
File: :0-0
Timestamp: 2025-09-29T19:05:24.987Z
Learning: The test execution plan for PR #1904 focuses on cluster-type conditional logic where nmstate functionality is bypassed on cloud clusters (Azure/AWS) but fully functional on bare-metal/PSI clusters, requiring different test strategies for each environment type.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 1236
File: conftest.py:539-557
Timestamp: 2025-06-18T13:26:04.504Z
Learning: In the openshift-virtualization-tests repository, PR #1236 intentionally limits error extraction to the setup phase only in the pytest_runtest_makereport hook. The scope is deliberately restricted to setup failures, not all test phases.

Learnt from: servolkov
Repo: RedHatQE/openshift-virtualization-tests PR: 2838
File: .github/workflows/net-utils-builder-staging.yml:37-37
Timestamp: 2025-11-25T01:56:54.902Z
Learning: In the openshift-virtualization-tests repository, when renaming container images that are built and used by GitHub Actions workflows, the changes must be done sequentially: first merge the workflow files (.github/workflows/) that update the image name in the CI/CD pipelines, then update the code references (like constants.py and manifest files) in a follow-up PR. This prevents the old workflow from running with mismatched image names during the transition.

Learnt from: vamsikrishna-siddu
Repo: RedHatQE/openshift-virtualization-tests PR: 2199
File: tests/storage/test_online_resize.py:108-113
Timestamp: 2025-09-28T14:43:07.181Z
Learning: In the openshift-virtualization-tests repo, PR #2199 depends on PR #2139 which adds architecture-specific OS_FLAVOR attributes to the Images.Cirros class (OS_FLAVOR_CIRROS for x86_64/ARM64, OS_FLAVOR_FEDORA for s390x), enabling conditional logic based on the underlying OS flavor in tests.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 0
File: :0-0
Timestamp: 2025-09-29T19:05:24.987Z
Learning: For PR #1904 test execution, the critical validation point is test_connectivity_over_migration_between_localnet_vms which should fail gracefully on cloud clusters but pass on bare-metal/PSI clusters, representing the core nmstate conditional logic functionality.

Learnt from: vsibirsk
Repo: RedHatQE/openshift-virtualization-tests PR: 2045
File: tests/virt/cluster/vm_lifecycle/conftest.py:46-47
Timestamp: 2025-09-15T06:49:53.478Z
Learning: In the openshift-virtualization-tests repo, large fixture refactoring efforts like the golden image data source migration are handled incrementally by directory/team ownership. The virt/cluster directory is handled separately from virt/node, tests/infra, tests/storage, etc., with each area managed by relevant teams in follow-up PRs.

Learnt from: dshchedr
Repo: RedHatQE/openshift-virtualization-tests PR: 1716
File: tests/virt/conftest.py:289-297
Timestamp: 2025-08-09T01:52:26.683Z
Learning: When user dshchedr moves working code from one location to another in the openshift-virtualization-tests repository, they prefer not to modify it unless there's a real issue, maintaining the original implementation to avoid introducing unnecessary changes.

Learnt from: vamsikrishna-siddu
Repo: RedHatQE/openshift-virtualization-tests PR: 2199
File: tests/storage/test_online_resize.py:108-113
Timestamp: 2025-09-28T14:43:07.181Z
Learning: In the openshift-virtualization-tests repo, PR #2199 depends on PR #2139 which adds the OS_FLAVOR attribute to the Images.Cirros class, making Images.Cirros.OS_FLAVOR available for conditional logic in tests.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 1028
File: utilities/infra.py:1249-1254
Timestamp: 2025-08-06T13:57:51.928Z
Learning: User rnetser verified that all calls to get_infrastructure() function use the admin_client parameter, confirming that signature changes requiring this parameter don't cause breaking changes in the openshift-virtualization-tests codebase.

Learnt from: servolkov
Repo: RedHatQE/openshift-virtualization-tests PR: 1776
File: libs/net/node_network.py:25-31
Timestamp: 2025-08-20T23:43:28.117Z
Learning: In the RedHatQE/openshift-virtualization-tests project, servolkov's team always uses bare metal (BM) clusters with IPv4 setup in their testing environment, making defensive checks for IPv4 data presence potentially redundant in their networking code.

Learnt from: jpeimer
Repo: RedHatQE/openshift-virtualization-tests PR: 1160
File: tests/storage/storage_migration/test_mtc_storage_class_migration.py:165-176
Timestamp: 2025-06-17T07:45:37.776Z
Learning: In the openshift-virtualization-tests repository, user jpeimer prefers explicit fixture parameters over composite fixtures in test methods, even when there are many parameters, as they find this approach more readable and maintainable for understanding test dependencies.

Learnt from: akri3i
Repo: RedHatQE/openshift-virtualization-tests PR: 1210
File: tests/virt/cluster/general/mass_machine_type_transition_tests/conftest.py:24-64
Timestamp: 2025-06-23T19:28:20.281Z
Learning: In OpenShift Virtualization mass machine type transition tests, the machine type glob pattern "pc-q35-rhel8.*.*" is intentionally hard-coded in the kubevirt_api_lifecycle_automation_job as it's used only once for this specific test case, with plans to update it in the future if the job needs to support other machine types.

Learnt from: RoniKishner
Repo: RedHatQE/openshift-virtualization-tests PR: 1411
File: utilities/os_utils.py:246-279
Timestamp: 2025-07-22T17:13:59.166Z
Learning: In the RedHatQE/openshift-virtualization-tests repository, CentOS preferences follow the format "centos-stream<version>" (e.g., "centos-stream9", "centos-stream10"). The generate_instance_type_centos_os_matrix function correctly uses regex to extract numeric versions and constructs the latest version string in the same format as the input preferences for proper comparison.

Learnt from: akri3i
Repo: RedHatQE/openshift-virtualization-tests PR: 1210
File: tests/virt/cluster/general/mass_machine_type_transition_tests/conftest.py:83-97
Timestamp: 2025-06-23T19:19:31.961Z
Learning: In OpenShift Virtualization mass machine type transition tests, the kubevirt_api_lifecycle_automation_job requires cluster-admin privileges to function properly, as confirmed by the test maintainer akri3i.

Learnt from: yossisegev
Repo: RedHatQE/openshift-virtualization-tests PR: 0
File: :0-0
Timestamp: 2025-12-07T14:51:53.484Z
Learning: In the openshift-virtualization-tests repository, the team has decided to avoid using predefined time constants (like TIMEOUT_2MIN, TIMEOUT_5SEC) and prefers using explicit numeric values for timeout parameters.

Learnt from: rnetser
Repo: RedHatQE/openshift-virtualization-tests PR: 2701
File: .github/scripts/coderabbit_workflow.py:45-46
Timestamp: 2025-11-27T11:45:54.224Z
Learning: In the openshift-virtualization-tests repository, when using the PyGithub library or similar API clients, rnetser prefers to rely on API-level validation rather than adding redundant client-side input validation, since the API will handle validation and raise appropriate errors.

