
[WIP] Test for kubevirt_rest_client_requests_total #1330


Open · wants to merge 1 commit into base: main

Conversation

OhadRevah (Contributor) commented Jun 30, 2025

Short description:

Test for the kubevirt_rest_client_requests_total metric.

More details:
What this PR does / why we need it:
Which issue(s) this PR fixes:
Special notes for reviewer:
jira-ticket:

https://issues.redhat.com/browse/CNV-54804

Summary by CodeRabbit

  • New Features

    • Added a new test to verify that deleting a running virtual machine correctly increments the related Prometheus metric.
  • Refactor

    • Improved test parametrization by generalizing the fixture for retrieving initial metric values, allowing for more flexible and reusable metric tests.
  • Tests

    • Updated existing metric tests to use the new generalized fixture for initial metric values.
    • Introduced a new constant for a specific Prometheus metric used in testing.

test for kubevirt_rest_client_requests_total metric.

coderabbitai bot commented Jun 30, 2025

Walkthrough

The changes introduce a generic fixture for retrieving initial metric values, replacing a metric-specific fixture. A new Prometheus metric constant is added, and a corresponding test is introduced to validate this metric after VM deletion. Existing tests are refactored to use the new generic fixture and parameterization approach.

Changes

  • tests/observability/metrics/conftest.py: Removed the metric-specific fixture; added a generic metric_initial_value fixture for dynamic metric retrieval.
  • tests/observability/metrics/constants.py: Added the constant KUBEVIRT_REST_CLIENT_REQUESTS_TOTAL_WITH_VERB_AND_RESOURCE.
  • tests/observability/metrics/test_cdi_metrics.py: Refactored the test to use a parameterized indirect fixture for the metric's initial value.
  • tests/observability/metrics/test_vms_metrics.py: Added a new test class and method to validate the new REST client requests metric after VM deletion.

Possibly related PRs

Suggested labels

verified, size/S, can-be-merged, branch-main, tox:verify-tc-requirement-polarion:passed, lgtm-hmeir, lgtm-rnetser, approved-rnetser, lgtm-openshift-virtualization-qe-bot

Suggested reviewers

  • RoniKishner
  • rnetser
  • dshchedr
  • vsibirsk
  • hmeir

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fc570ec and c2cd98c.

📒 Files selected for processing (4)
  • tests/observability/metrics/conftest.py (1 hunks)
  • tests/observability/metrics/constants.py (1 hunks)
  • tests/observability/metrics/test_cdi_metrics.py (1 hunks)
  • tests/observability/metrics/test_vms_metrics.py (2 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: akri3i
PR: RedHatQE/openshift-virtualization-tests#1210
File: tests/virt/cluster/general/mass_machine_type_transition_tests/conftest.py:83-97
Timestamp: 2025-06-23T19:19:31.961Z
Learning: In OpenShift Virtualization mass machine type transition tests, the kubevirt_api_lifecycle_automation_job requires cluster-admin privileges to function properly, as confirmed by the test maintainer akri3i.
Learnt from: rnetser
PR: RedHatQE/openshift-virtualization-tests#1236
File: conftest.py:539-557
Timestamp: 2025-06-18T13:26:04.504Z
Learning: In the openshift-virtualization-tests repository, PR #1236 intentionally limits error extraction to the setup phase only in the pytest_runtest_makereport hook. The scope is deliberately restricted to setup failures, not all test phases.
Learnt from: OhadRevah
PR: RedHatQE/openshift-virtualization-tests#1172
File: tests/observability/metrics/test_migration_metrics.py:36-41
Timestamp: 2025-06-10T11:41:36.366Z
Learning: For transient migration metrics like KUBEVIRT_VMI_MIGRATIONS_IN_SCHEDULING_PHASE and KUBEVIRT_VMI_MIGRATIONS_IN_RUNNING_PHASE in OpenShift Virtualization tests, use check_times=1 with wait_for_expected_metric_value_sum() to capture the metric value immediately without requiring multiple consecutive matches, as these metrics are raised for very short periods and may decrease while checking again.
Learnt from: akri3i
PR: RedHatQE/openshift-virtualization-tests#1210
File: tests/virt/cluster/general/mass_machine_type_transition_tests/conftest.py:24-64
Timestamp: 2025-06-23T19:28:20.281Z
Learning: In OpenShift Virtualization mass machine type transition tests, the 2-minute timeout (TIMEOUT_2MIN) is sufficient for the kubevirt_api_lifecycle_automation_job because it only tests with one VM at a time, not multiple VMs simultaneously.
Learnt from: akri3i
PR: RedHatQE/openshift-virtualization-tests#1210
File: tests/virt/cluster/general/mass_machine_type_transition_tests/conftest.py:142-149
Timestamp: 2025-06-23T19:18:12.275Z
Learning: In OpenShift Virtualization machine type transition tests, the kubevirt_api_lifecycle_automation_job updates VM machine types to the latest version based on a MACHINE_TYPE_GLOB pattern, and subsequent fixtures may intentionally revert the machine type to test bidirectional transition behavior.
Learnt from: jpeimer
PR: RedHatQE/openshift-virtualization-tests#1160
File: tests/storage/storage_migration/test_mtc_storage_class_migration.py:165-176
Timestamp: 2025-06-17T07:45:37.776Z
Learning: In the openshift-virtualization-tests repository, user jpeimer prefers explicit fixture parameters over composite fixtures in test methods, even when there are many parameters, as they find this approach more readable and maintainable for understanding test dependencies.
Learnt from: OhadRevah
PR: RedHatQE/openshift-virtualization-tests#1166
File: tests/observability/metrics/conftest.py:1065-1077
Timestamp: 2025-06-18T09:21:34.315Z
Learning: In tests/observability/metrics/conftest.py, when creating fixtures that modify shared Windows VM state (like changing nodeSelector), prefer using function scope rather than class scope to ensure ResourceEditor context managers properly restore the VM state after each test, maintaining test isolation while still reusing expensive Windows VM fixtures.
Learnt from: OhadRevah
PR: RedHatQE/openshift-virtualization-tests#1166
File: tests/observability/metrics/test_vms_metrics.py:129-137
Timestamp: 2025-06-18T09:19:05.769Z
Learning: For Windows VM testing in tests/observability/metrics/test_vms_metrics.py, it's acceptable to have more fixture parameters than typical pylint recommendations when reusing expensive Windows VM fixtures for performance. Windows VMs take a long time to deploy, so reusing fixtures like windows_vm_for_test and adding labels via windows_vm_with_low_bandwidth_migration_policy is preferred over creating separate fixtures that would require additional VM deployments.
Learnt from: OhadRevah
PR: RedHatQE/openshift-virtualization-tests#1166
File: tests/observability/metrics/conftest.py:1178-1180
Timestamp: 2025-06-18T09:15:25.436Z
Learning: In tests/observability/metrics/conftest.py, the `stopped_vm_metric_1` fixture is intentionally designed to stop the VM and leave it in that state - it does not need to restart the VM afterward as this is the desired behavior for the tests that use it.
Learnt from: OhadRevah
PR: RedHatQE/openshift-virtualization-tests#1166
File: tests/observability/metrics/conftest.py:1065-1077
Timestamp: 2025-06-18T09:31:06.311Z
Learning: In tests/observability/metrics/conftest.py, ResourceEditor context managers automatically restore VM configuration when the context exits, including nodeSelector patches. The fixture pattern with `with ResourceEditor(patches={vm: {...}})` followed by `yield` properly restores the VM to its original state without requiring manual teardown logic.
Learnt from: OhadRevah
PR: RedHatQE/openshift-virtualization-tests#1166
File: tests/observability/metrics/conftest.py:1183-1186
Timestamp: 2025-06-22T13:47:35.014Z
Learning: In tests/observability/metrics/conftest.py, the `stopped_windows_vm` fixture is designed to temporarily stop the Windows VM for a test, then restart it during teardown (after yield) because the Windows VM is module-scoped and needs to be available for other tests that depend on it being in a running state.
Learnt from: OhadRevah
PR: RedHatQE/openshift-virtualization-tests#584
File: tests/observability/metrics/test_network_metrics.py:62-76
Timestamp: 2025-05-27T11:44:14.859Z
Learning: The windows_vm_for_test fixture in tests/observability/metrics/conftest.py does not have a request argument, so it cannot be parametrized using @pytest.mark.parametrize with indirect=True. This is different from vm_for_test fixture which accepts parameters through parametrization.
Learnt from: jpeimer
PR: RedHatQE/openshift-virtualization-tests#954
File: tests/storage/storage_migration/conftest.py:264-269
Timestamp: 2025-05-28T10:50:56.122Z
Learning: In the openshift-virtualization-tests codebase, cleanup pytest fixtures like `deleted_old_dvs_of_stopped_vms`, `deleted_completed_virt_launcher_source_pod`, and `deleted_old_dvs_of_online_vms` do not require yield statements. These fixtures perform cleanup operations and work correctly without yielding values.
tests/observability/metrics/test_cdi_metrics.py (1)
Learnt from: akri3i
PR: RedHatQE/openshift-virtualization-tests#1210
File: tests/virt/cluster/general/mass_machine_type_transition_tests/test_mass_machine_type_transition.py:97-104
Timestamp: 2025-06-23T19:24:28.327Z
Learning: In OpenShift Virtualization machine type transition tests, the test_machine_type_transition_without_restart method with restart_required=false parameter validates that VM machine types do NOT change when the lifecycle job runs with restart disabled, so the assertion should check against the original machine type rather than the target machine type.
tests/observability/metrics/test_vms_metrics.py (1)
Learnt from: OhadRevah
PR: RedHatQE/openshift-virtualization-tests#1166
File: tests/observability/metrics/test_vms_metrics.py:144-149
Timestamp: 2025-06-09T08:57:47.070Z
Learning: Windows VMs in the test suite transition through the starting state more quickly than Linux VMs. When testing the kubevirt_vm_starting_status_last_transition_timestamp_seconds metric for Windows VMs, use max_over_time() with a time window (e.g., 10 minutes) to capture the metric value, because by the time the test runs on a running Windows VM, the current metric value would be 0.
tests/observability/metrics/conftest.py (1)
Learnt from: dshchedr
PR: RedHatQE/openshift-virtualization-tests#890
File: tests/virt/node/descheduler/conftest.py:146-148
Timestamp: 2025-06-13T01:08:18.579Z
Learning: The fixture `vms_orig_nodes_before_node_drain` in tests/virt/node/descheduler/conftest.py is intentionally kept with the “node_drain” suffix because it represents the state directly before a node drain step; future refactor suggestions should preserve this name unless requirements change.
🧬 Code Graph Analysis (3)
tests/observability/metrics/test_cdi_metrics.py (3)
tests/conftest.py (1)
  • prometheus (1559-1563)
tests/observability/metrics/conftest.py (2)
  • metric_initial_value (1166-1167)
  • created_fake_data_volume_resource (1150-1162)
tests/observability/utils.py (1)
  • validate_metrics_value (20-43)
tests/observability/metrics/test_vms_metrics.py (4)
tests/conftest.py (2)
  • prometheus (1559-1563)
  • running_metric_vm (2288-2297)
tests/observability/metrics/conftest.py (1)
  • metric_initial_value (1166-1167)
utilities/storage.py (1)
  • delete (539-544)
tests/observability/utils.py (1)
  • validate_metrics_value (20-43)
tests/observability/metrics/conftest.py (2)
tests/conftest.py (1)
  • prometheus (1559-1563)
utilities/monitoring.py (1)
  • get_metrics_value (169-175)
🪛 Pylint (3.3.7)
tests/observability/metrics/test_vms_metrics.py

[refactor] 600-600: Too few public methods (1/2)

(R0903)

⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: can-be-merged
  • GitHub Check: can-be-merged
  • GitHub Check: conventional-title
  • GitHub Check: can-be-merged
  • GitHub Check: tox
  • GitHub Check: build-container
🔇 Additional comments (5)
tests/observability/metrics/constants.py (1)

84-86: LGTM! Well-structured metric constant.

The new constant follows established naming conventions and properly defines the Prometheus metric with appropriate labels for tracking REST client DELETE requests on virtualmachineinstances.
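For reference, a selector of this shape is typically stored as a PromQL string; the following is a hypothetical sketch of the constant (label names, values, and quoting are assumed here, not copied from the PR):

```python
# Hypothetical sketch; the exact label set and formatting in constants.py may differ.
KUBEVIRT_REST_CLIENT_REQUESTS_TOTAL_WITH_VERB_AND_RESOURCE = (
    'kubevirt_rest_client_requests_total{verb="DELETE", resource="virtualmachineinstances"}'
)
```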

tests/observability/metrics/conftest.py (1)

1166-1167: Excellent refactoring to a generic metric fixture.

The new metric_initial_value fixture properly replaces hardcoded metric fixtures with a reusable approach using parameterization. The implementation correctly uses request.param for dynamic metric name injection and maintains the expected integer return type.
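As a rough illustration, a generic fixture of this kind could look like the sketch below; the exact get_metrics_value call signature and the int conversion are assumptions for illustration, not the PR's verbatim code.

```python
import pytest

from utilities.monitoring import get_metrics_value


@pytest.fixture()
def metric_initial_value(prometheus, request):
    # request.param carries the metric name supplied via indirect parametrization;
    # the positional get_metrics_value call is an assumed signature.
    return int(get_metrics_value(prometheus, request.param))
```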

tests/observability/metrics/test_vms_metrics.py (2)

18-18: Import addition looks good.

The import of the new constant is properly placed and follows the existing import structure.


600-616: Well-implemented test for REST client metrics.

The test properly validates that the kubevirt_rest_client_requests_total metric increments when a VM is deleted. The implementation follows established patterns:

  • Uses indirect parameterization with the new generic metric_initial_value fixture
  • Performs the action (VM deletion) and validates the metric increment
  • Properly uses validate_metrics_value utility for assertion

The static analysis hint about "too few public methods" can be safely ignored as single-method test classes are common and acceptable in pytest.
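A hypothetical sketch of the test flow described above; the class and method names, the delete(wait=True) call, and the validate_metrics_value keyword arguments are illustrative assumptions rather than the PR's exact code.

```python
import pytest

from tests.observability.metrics.constants import (
    KUBEVIRT_REST_CLIENT_REQUESTS_TOTAL_WITH_VERB_AND_RESOURCE,
)
from tests.observability.utils import validate_metrics_value


@pytest.mark.parametrize(
    "metric_initial_value",
    [KUBEVIRT_REST_CLIENT_REQUESTS_TOTAL_WITH_VERB_AND_RESOURCE],
    indirect=True,
)
class TestRestClientRequestsTotal:
    def test_metric_incremented_after_vm_deletion(
        self, prometheus, metric_initial_value, running_metric_vm
    ):
        # Delete the running VM, then expect the DELETE-request counter to rise
        # above the baseline captured by the metric_initial_value fixture.
        running_metric_vm.delete(wait=True)
        validate_metrics_value(
            prometheus=prometheus,
            metric_name=KUBEVIRT_REST_CLIENT_REQUESTS_TOTAL_WITH_VERB_AND_RESOURCE,
            expected_value=str(metric_initial_value + 1),
        )
```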

tests/observability/metrics/test_cdi_metrics.py (1)

72-82: Excellent refactoring to use the generic fixture.

The test has been successfully updated to use the new metric_initial_value fixture with parameterization. The refactoring:

  • Maintains the same test logic and Polarion marker
  • Uses proper indirect parameterization pattern
  • Aligns with the broader effort to generalize metric testing infrastructure
  • Preserves the original functionality while improving reusability
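In outline, the refactor described above swaps a dedicated initial-value fixture for indirect parametrization of the generic one; the metric name and test signature below are placeholders, not the actual CDI test.

```python
import pytest

EXAMPLE_CDI_METRIC = "kubevirt_cdi_example_metric_total"  # placeholder metric name


@pytest.mark.parametrize("metric_initial_value", [EXAMPLE_CDI_METRIC], indirect=True)
def test_cdi_metric_example(
    prometheus, metric_initial_value, created_fake_data_volume_resource
):
    # Same test body as before the refactor; only the source of the initial
    # metric value changed (generic fixture instead of a metric-specific one).
    ...
```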

@openshift-virtualization-qe-bot

Report bugs in Issues

Welcome! 🎉

This pull request will be automatically processed with the following features:

🔄 Automatic Actions

  • Reviewer Assignment: Reviewers are automatically assigned based on the OWNERS file in the repository root
  • Size Labeling: PR size labels (XS, S, M, L, XL, XXL) are automatically applied based on changes
  • Issue Creation: A tracking issue is created for this PR and will be closed when the PR is merged or closed
  • Pre-commit Checks: pre-commit runs automatically if .pre-commit-config.yaml exists
  • Branch Labeling: Branch-specific labels are applied to track the target branch
  • Auto-verification: Auto-verified users have their PRs automatically marked as verified

📋 Available Commands

PR Status Management

  • /wip - Mark PR as work in progress (adds WIP: prefix to title)
  • /wip cancel - Remove work in progress status
  • /hold - Block PR merging (approvers only)
  • /hold cancel - Unblock PR merging
  • /verified - Mark PR as verified
  • /verified cancel - Remove verification status

Review & Approval

  • /lgtm - Approve changes (looks good to me)
  • /approve - Approve PR (approvers only)
  • /automerge - Enable automatic merging when all requirements are met (maintainers and approvers only)
  • /assign-reviewers - Assign reviewers based on OWNERS file
  • /assign-reviewer @username - Assign specific reviewer
  • /check-can-merge - Check if PR meets merge requirements

Testing & Validation

  • /retest tox - Run Python test suite with tox
  • /retest build-container - Rebuild and test container image
  • /retest all - Run all available tests

Container Operations

  • /build-and-push-container - Build and push container image (tagged with PR number)
    • Supports additional build arguments: /build-and-push-container --build-arg KEY=value

Cherry-pick Operations

  • /cherry-pick <branch> - Schedule cherry-pick to target branch when PR is merged
    • Multiple branches: /cherry-pick branch1 branch2 branch3

Label Management

  • /<label-name> - Add a label to the PR
  • /<label-name> cancel - Remove a label from the PR

✅ Merge Requirements

This PR will be automatically approved when the following conditions are met:

  1. Approval: /approve from at least one approver
  2. LGTM Count: Minimum 2 /lgtm from reviewers
  3. Status Checks: All required status checks must pass
  4. No Blockers: No WIP, hold, or conflict labels
  5. Verified: PR must be marked as verified (if verification is enabled)

📊 Review Process

Approvers and Reviewers

Approvers:

  • dshchedr
  • myakove
  • rnetser
  • vsibirsk

Reviewers:

  • OhadRevah
  • RoniKishner
  • dshchedr
  • hmeir
  • rnetser
  • vsibirsk
Available Labels
  • hold
  • verified
  • wip
  • lgtm
  • approve
  • automerge

💡 Tips

  • WIP Status: Use /wip when your PR is not ready for review
  • Verification: The verified label is automatically removed on each new commit
  • Cherry-picking: Cherry-pick labels are processed when the PR is merged
  • Container Builds: Container images are automatically tagged with the PR number
  • Permission Levels: Some commands require approver permissions
  • Auto-verified Users: Certain users have automatic verification and merge privileges

For more information, please refer to the project documentation or contact the maintainers.

@openshift-virtualization-qe-bot

D/S test tox -e verify-tc-requirement-polarion failed: cnv-tests-tox-executor/13252
