tests: handle case of acceptance-test-config.yml with no valid test cases #531
Conversation
Pull Request Overview
This pull request modifies the `get_scenarios` function to handle cases where a declared category in the acceptance tests configuration does not have a corresponding "tests" key. The key changes include:
- Updating the condition to check both for the existence of the category and for the "tests" key.
- Removing the ValueError in favor of returning an empty list when no tests are found (see the sketch below).
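For orientation, here is a minimal sketch of the resulting guard. The helper name, the config shape, and the `bypass_reason` example are assumptions drawn from the review comments below, not the exact CDK code:

```python
# Minimal sketch of the revised guard; illustrative names, not the
# exact code in airbyte_cdk/test/standard_tests/connector_base.py.
def get_tests_for_category(all_tests_config: dict, category: str) -> list[dict]:
    """Return the declared test entries for a category, or [] if none exist."""
    acceptance_tests = all_tests_config.get("acceptance_tests", {})
    if (
        category not in acceptance_tests
        or "tests" not in acceptance_tests[category]  # e.g. only 'bypass_reason' is set
    ):
        # Previously this case raised a ValueError; now the category is skipped.
        return []
    return acceptance_tests[category]["tests"]
```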
Comments suppressed due to low confidence (1)
airbyte_cdk/test/standard_tests/connector_base.py:162
- [nitpick] Consider adding a comment to explain why an empty list is returned when tests are absent for the category.
`return []`
📝 Walkthrough

The update enhances the test framework for Airbyte connectors by extending scenario aggregation across multiple test categories, adding support for the "spec" verb in test jobs, and enabling optional test scenarios in connector creation and job execution. It also introduces a new standard test for connector specifications and adds a property to filter spec messages from entrypoint outputs. Additionally, it refines configuration handling in test scenarios to optionally allow empty configs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TestRunner
    participant ConnectorBase
    participant JobRunner
    participant EntrypointOutput
    TestRunner->>ConnectorBase: get_scenarios(config)
    ConnectorBase->>ConnectorBase: Aggregate scenarios from "connection" and "spec" categories
    ConnectorBase-->>TestRunner: Return combined filtered scenarios list
    TestRunner->>ConnectorBase: create_connector(scenario=None)
    ConnectorBase-->>TestRunner: Connector instance with config including manifest and optional scenario config
    TestRunner->>JobRunner: run_test_job(connector, verb="spec", test_scenario=None)
    JobRunner->>EntrypointOutput: _run_command(...)
    EntrypointOutput-->>JobRunner: Output messages
    JobRunner-->>TestRunner: EntrypointOutput
    TestRunner->>EntrypointOutput: spec_messages
    EntrypointOutput-->>TestRunner: Filtered SPEC messages list
```
Would you like me to help draft a quick summary for the changelog or release notes as well? Wdyt?
Actionable comments posted: 0
🧹 Nitpick comments (1)
airbyte_cdk/test/standard_tests/connector_base.py (1)
145-150: Minor observation about documentation vs. implementation.

The comment says "Get acceptance tests for a given category," but the function hardcodes the category to "connection". Would it make sense to either update the docstring or make the category configurable through a parameter? This isn't related to your current change, just something I noticed while reviewing.
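If the parameterized route were taken, a hypothetical signature might look like this (the default value and docstring are assumptions, not the current CDK code):

```python
def get_acceptance_tests(all_tests_config: dict, category: str = "connection") -> list[dict]:
    """Get acceptance tests for the given category (defaults to "connection")."""
    # Look up the category section, tolerating a missing category or 'tests' key.
    section = all_tests_config.get("acceptance_tests", {}).get(category, {})
    return section.get("tests", [])
```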
🔇 Additional comments (1)

airbyte_cdk/test/standard_tests/connector_base.py (1)

157-162: Good refactoring to handle missing 'tests' gracefully!

This change nicely handles the case where a category exists but has no 'tests' key (which happens when 'bypass_reason' is present). Would it help debugging to add a debug log when this specific case is encountered? Maybe something like:

```diff
 if (
     category not in all_tests_config["acceptance_tests"]
     or "tests" not in all_tests_config["acceptance_tests"][category]
 ):
+    if category in all_tests_config["acceptance_tests"] and "tests" not in all_tests_config["acceptance_tests"][category]:
+        logging.debug(f"Category '{category}' exists but has no 'tests' key. Returning empty list.")
     return []
```

Just a thought - wdyt?
Actionable comments posted: 0

🔭 Outside diff range comments (1)

airbyte_cdk/test/standard_tests/connector_base.py (1)

1-180: ⚠️ Potential issue: Fix Ruff formatting issue

The linter reports a formatting issue in this file. Run `ruff format` to address it.

🪛 GitHub Actions: Linters
[error] 1-1: Ruff formatting check failed. File would be reformatted. Run `ruff format` to fix code style issues.
🧹 Nitpick comments (1)

airbyte_cdk/test/standard_tests/source_base.py (1)

67-86: Improved test coverage with new spec test method

This addition introduces a nice standardized test for the `spec` verb, which runs without requiring a test scenario. Good error handling and clear assertion messages.

I'm wondering if the tuple formatting on lines 83-84 might need attention? The f-string prefix appears to be missing from the error message, and the message is wrapped in a tuple, which might not format as expected:

```diff
-    assert len(result.spec_messages) == 1, (
-        "Expected exactly 1 spec message but got {len(result.spec_messages)}",
-        result.errors,
-    )
+    assert len(result.spec_messages) == 1, (
+        f"Expected exactly 1 spec message but got {len(result.spec_messages)}",
+        result.errors,
+    )
```

What do you think?
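For context, the `spec_messages` property referenced in this assertion is described in the walkthrough as a filter over the entrypoint output. A rough sketch of that pattern, with an illustrative class name standing in for the real `EntrypointOutput`, might be:

```python
from airbyte_cdk.models import AirbyteMessage, Type

class EntrypointOutputSketch:
    """Illustrative stand-in for the real EntrypointOutput class."""

    def __init__(self, messages: list[AirbyteMessage]) -> None:
        self._messages = messages

    @property
    def spec_messages(self) -> list[AirbyteMessage]:
        # Keep only the SPEC messages emitted by the connector.
        return [m for m in self._messages if m.type == Type.SPEC]
```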
🔇 Additional comments (12)

airbyte_cdk/test/standard_tests/declarative_sources.py (2)

67-67: LGTM! Added support for optional scenarios

The parameter update correctly makes `scenario` optional, aligning with the new `test_spec` method requirements.

77-82: Good defensive implementation for handling optional scenarios

This restructuring safely initializes the config with the manifest and conditionally updates it with scenario data if provided. Clean, defensive programming.
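The pattern being praised here is roughly the following. This is a sketch only: the manifest key and the method names are assumptions for illustration, not the exact CDK code:

```python
from typing import Any, Optional

def build_config(manifest: dict, scenario: Optional[Any] = None) -> dict:
    # Start from the manifest alone so spec tests can run without a scenario.
    config: dict = {"__injected_declarative_manifest": manifest}
    if scenario is not None:
        # Merge scenario config only when a scenario was actually provided.
        config.update(scenario.get_config(empty_if_missing=True))
    return config
```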
airbyte_cdk/test/standard_tests/connector_base.py (2)

92-92: LGTM! Added support for optional scenarios

The parameter update correctly makes `scenario` optional, aligning with the new `test_spec` method requirements.
150-180: ❓ Verification inconclusive

Improved test scenario discovery across multiple categories

The refactored `get_scenarios` method now aggregates test scenarios from multiple categories and handles missing "tests" sections gracefully. This is a nice enhancement that supports the PR's objective of handling cases where a declared category doesn't have 'tests'.

🏁 Script executed:

```bash
#!/bin/bash
# Check for the existence of acceptance test config files in the repo
# and look for categories that might be missing 'tests' sections
rg --type yaml "acceptance_tests:" -A 20
```

Length of output: 2280

Consider dynamic test category discovery

Currently, the code hardcodes `categories = ["connection", "spec"]`, which will skip other sections like `discovery`, `basic_read`, or `incremental` even when they have tests (and your sample YAML shows them present). Would you consider deriving this list directly from the config, e.g. `categories = list(all_tests_config["acceptance_tests"].keys())`, so that any new or custom categories are automatically picked up? wdyt? A sketch follows below.

• File: airbyte_cdk/test/standard_tests/connector_base.py
  – Line ~151: `categories = ["connection", "spec"]`
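A sketch of that suggestion, assuming the config shape shown elsewhere in this thread:

```python
def get_all_scenarios(all_tests_config: dict) -> list[dict]:
    acceptance_tests = all_tests_config.get("acceptance_tests", {})
    scenarios: list[dict] = []
    # Derive categories from the config itself instead of hardcoding
    # ["connection", "spec"], so new or custom categories are picked up.
    for category, section in acceptance_tests.items():
        scenarios.extend(section.get("tests", []))
    return scenarios
```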
airbyte_cdk/test/standard_tests/_job_runner.py (8)

59-59: LGTM! Added 'spec' verb support

The addition of "spec" to the list of accepted verbs supports the new `test_spec` method.

61-61: LGTM! Made test_scenario parameter optional

Making the `test_scenario` parameter optional (defaulting to None) correctly supports the new spec test flow.

84-93: LGTM! Added safety checks for optional test_scenario

The added conditionals properly check for `test_scenario` before accessing its attributes, preventing potential NoneType errors.

115-120: Good refactoring of exception handling logic

Creating a local `expect_exception` variable centralizes the exception expectation logic and makes the code more maintainable. Nice clean-up!

124-124: LGTM! Updated exception handling with optional scenario support

The modified logic for setting the `expecting_exception` parameter properly handles the case when `test_scenario` is None.

126-127: LGTM! Added safety check for exception handling

The added check for `test_scenario` in the conditional properly handles scenarios where test_scenario is None.

141-141: LGTM! Updated to use local expect_exception variable

Using the local `expect_exception` variable here makes the code more consistent and readable.

155-155: LGTM! Updated to use local expect_exception variable

Using the local `expect_exception` variable here makes the code more consistent and readable.
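Taken together, the changes described in these eight comments amount to roughly the following control flow. Everything here is a condensed sketch: `_run_command` appears in the sequence diagram above, but its signature, the attribute names on the scenario, and the verb list are assumptions:

```python
from typing import Any, Optional

def _run_command(connector: Any, args: list[str], expecting_exception: bool) -> Any:
    ...  # stand-in for the helper shown in the sequence diagram

def run_test_job(connector: Any, verb: str, test_scenario: Optional[Any] = None) -> Any:
    # "spec" is now an accepted verb alongside the existing ones.
    assert verb in ("spec", "check", "discover", "read"), f"Unsupported verb: {verb}"
    args = [verb]
    if test_scenario is not None:
        # Scenario-dependent arguments are only added when a scenario exists.
        args += ["--config", str(test_scenario.config_path)]
    # Centralizing the expectation in one local keeps the later checks consistent.
    expect_exception = bool(
        test_scenario and getattr(test_scenario, "expect_exception", False)
    )
    return _run_command(connector, args, expecting_exception=expect_exception)
```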
/autofix

/autofix
approving with some comments
Actionable comments posted: 0
🧹 Nitpick comments (1)
airbyte_cdk/test/standard_tests/models/scenario.py (1)

16-16: Unused import detected.

The `empty` import from numpy doesn't appear to be used anywhere in this file. Would you consider removing this unused import to keep the imports clean? wdyt?

```diff
- from numpy import empty
```
🔇 Additional comments (1)

airbyte_cdk/test/standard_tests/models/scenario.py (1)

47-70: Nice enhancement to handle optional configuration gracefully!

I like the addition of the `empty_if_missing` parameter to allow callers to decide how to handle missing configurations. This aligns well with the PR objective of handling test cases where valid configurations may not exist. The documentation for the parameter is clear and helpful too.

This change should successfully address the issue with connectors like `source-7shift` that have acceptance test configurations without valid test cases.
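The behavior described above might look roughly like this, with an illustrative class name and field standing in for the real scenario model:

```python
from typing import Optional

class ScenarioSketch:
    """Illustrative stand-in for the real test scenario model."""

    def __init__(self, config_dict: Optional[dict] = None) -> None:
        self.config_dict = config_dict

    def get_config(self, *, empty_if_missing: bool = False) -> dict:
        """Return the scenario config, or {} when allowed and none is set."""
        if self.config_dict is None:
            if empty_if_missing:
                # Caller opted in to treating a missing config as empty.
                return {}
            raise ValueError("No config found for this scenario.")
        return self.config_dict
```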
/autofix
This PR handles the case of missing config in `acceptance-test-config.yml`. Specifically, it now runs successfully on `source-7shift`, which has the following definition:

This test config file is especially useless because `spec` is not the same as `manifest.yml`, and so it's not really helpful at all in defining test cases.

Nevertheless, we test what we can, and the tests succeed now:
Summary by CodeRabbit
New Features
Bug Fixes