Feat: Support pandas in BigQuery cache #597
base: main
Conversation
📝 Walkthrough
This pull request introduces modifications to the caching and data reading mechanisms in the Airbyte project. The changes primarily focus on enhancing the data retrieval process for different cache types, with a specific emphasis on BigQuery integration. The modifications include adding a new private method for reading SQL tables into Pandas DataFrames, updating import statements, and adding a new dependency to support BigQuery data handling.
Hey there! 👋 I noticed some interesting changes in the caching and data retrieval mechanisms.
Actionable comments posted: 1
🧹 Nitpick comments (6)
airbyte/caches/bigquery.py (3)
55-60: Consider adding a docstring. Would it help to include a docstring here explaining `_read_to_pandas_dataframe` usage, the new `chunksize` parameter, and any exceptions that might arise? Wdyt?
61-64: Validate or document popped keyword arguments. We're discarding `"con"` and `"schema"`. Is there a reason to ignore them silently versus logging a warning or raising an error to inform the caller? Wdyt?
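For reference, one possible shape for the non-silent option is sketched below. This is only an illustration; the helper name, message text, and use of the standard-library `warnings` module are assumptions, not code from this PR.

```python
import warnings


def drop_unsupported_kwargs(kwargs: dict) -> dict:
    """Warn (instead of silently discarding) when SQLAlchemy-style kwargs are passed."""
    for name in ("con", "schema"):
        if name in kwargs:
            kwargs.pop(name)
            warnings.warn(
                f"Argument '{name}' is ignored for BigQuery reads (pandas_gbq is used instead).",
                stacklevel=2,
            )
    return kwargs
```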
82-83: Confirm returning unaltered DataFrame. We're returning the DataFrame as-is. Any chance we'd want a copy or some form of read-only structure for safety? Wdyt?
tests/integration_tests/cloud/test_cloud_sql_reads.py (1)
72-72: Explore partial validation of DataFrame contents. We assert `pandas_df.shape == (100, 20)`, which is great for row-column checks. Might it be useful to validate a few columns or cells to ensure data integrity? Wdyt?
tests/integration_tests/test_all_cache_types.py (1)
161-165: Double-check boundary conditions for chunk batching. We're splitting the Arrow dataset into chunks of size 10 and expecting 20 batches for 200 rows. Do we want a test verifying off-by-one or partial chunks if the row count isn't divisible by 10? Wdyt?
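A minimal sketch of what such a boundary test could look like. The chunking helper here is a stand-in for the cache's actual batching logic, and the row count of 205 is an arbitrary example, not taken from the PR.

```python
def _chunk(rows: list, size: int) -> list[list]:
    # Stand-in for the cache's chunking logic: slice a sequence into size-limited batches.
    return [rows[i : i + size] for i in range(0, len(rows), size)]


def test_partial_final_chunk():
    rows = list(range(205))  # 205 is deliberately not divisible by 10
    batches = _chunk(rows, size=10)
    assert len(batches) == 21  # 20 full batches plus one partial batch
    assert len(batches[-1]) == 5  # the trailing batch holds the remainder
    assert sum(len(b) for b in batches) == len(rows)  # no rows lost or duplicated
```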
airbyte/caches/base.py (1)
184-191: Consider adding a docstring to explain the method's purpose? The method looks good, but adding a docstring would help explain its role as an extension point for different cache implementations, wdyt?
```diff
 def _read_to_pandas_dataframe(
     self,
     table_name: str,
     con: Engine,
     **kwargs,
 ) -> pd.DataFrame:
+    """Read a SQL table into a pandas DataFrame.
+
+    This method serves as an extension point for different cache implementations
+    to customize how they read data into pandas DataFrames.
+
+    Args:
+        table_name: Name of the table to read
+        con: SQLAlchemy engine connection
+        **kwargs: Additional arguments passed to the underlying read implementation
+
+    Returns:
+        pd.DataFrame: The table data as a pandas DataFrame
+    """
     return pd.read_sql_table(table_name, con=con, **kwargs)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- airbyte/caches/base.py (5 hunks)
- airbyte/caches/bigquery.py (2 hunks)
- pyproject.toml (1 hunks)
- tests/integration_tests/cloud/test_cloud_sql_reads.py (2 hunks)
- tests/integration_tests/test_all_cache_types.py (1 hunks)
🧰 Additional context used
📓 Learnings (1)
airbyte/caches/bigquery.py (1)
Learnt from: aaronsteers
PR: airbytehq/PyAirbyte#281
File: airbyte/caches/bigquery.py:40-43
Timestamp: 2024-11-10T16:30:14.198Z
Learning: The `BigQueryCache.get_arrow_dataset` method should have a docstring that correctly states the reason for the `NotImplementedError` as BigQuery not supporting `to_arrow`, instead of incorrectly mentioning `pd.read_sql_table`.
🔇 Additional comments (9)
airbyte/caches/bigquery.py (4)
22-23: Ensure required version constraints for new libraries. We've introduced `pandas` and `pandas_gbq`. Are you confident that our `pyproject.toml` or similar config file suitably pins or restricts their versions? This helps avoid unforeseen compatibility issues. Wdyt?
37-38: Check Python version for type hinting. Importing `Iterator` from `collections.abc` is a good practice. However, the `|` union syntax in `chunksize: int | None` requires Python 3.10+. Is that guaranteed by our environment or `pyproject.toml`? Wdyt?
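If the project still needs to support Python 3.9, one common workaround (shown here as a general illustration, not a change this PR makes) is to defer annotation evaluation via PEP 563, which lets the `|` syntax appear in annotations as long as nothing evaluates them at runtime (e.g., `typing.get_type_hints` or pydantic):

```python
from __future__ import annotations  # annotations become lazy strings, so `int | None` parses pre-3.10

from collections.abc import Iterator

import pandas as pd


def read_to_pandas_dataframe(
    table_name: str,
    chunksize: int | None = None,
) -> pd.DataFrame | Iterator[pd.DataFrame]:
    """Signature-only sketch (hypothetical function name); the body is elided."""
    raise NotImplementedError
```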
65-71: Handle potential `read_gbq` exceptions. If `pandas_gbq.read_gbq` fails (e.g., invalid credentials or query errors), do we want to catch and handle those exceptions more gracefully, perhaps with logging? Wdyt?
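One possible shape for that, as a sketch only: the wrapper name, logger setup, and the choice to log-and-re-raise are assumptions rather than code from this PR.

```python
import logging

import pandas as pd
import pandas_gbq

logger = logging.getLogger(__name__)  # hypothetical logger for this sketch


def read_gbq_with_logging(query_or_table: str, project_id: str, credentials) -> pd.DataFrame:
    """Illustrative wrapper: log the failure with context, then let the caller decide."""
    try:
        return pandas_gbq.read_gbq(query_or_table, project_id=project_id, credentials=credentials)
    except Exception:
        logger.exception("BigQuery read failed for %r in project %r", query_or_table, project_id)
        raise
```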
74-77: Confirm the returned format from `pandas_gbq`. `pandas_gbq.read_gbq` typically returns a `DataFrame`. Is there a scenario where it doesn't, and do we need to handle it differently? Wdyt?
tests/integration_tests/cloud/test_cloud_sql_reads.py (1)
175-177: Consider verifying DataFrame columns. We again confirm `(100, 20)`. Would you like to confirm column names or types here, ensuring consistent schemas across jobs? Wdyt?
airbyte/caches/base.py (3)
31-31: LGTM! Good type safety practice. The explicit import of `Engine` from sqlalchemy.engine enhances type safety for the new pandas integration.
71-73: LGTM! Nice formatting. The multi-line method signature improves readability.
206-210: LGTM! Nice refactoring. Good job extracting the common pandas reading logic into `_read_to_pandas_dataframe`. The chunked reading approach in `get_arrow_dataset` is particularly nice for handling large tables efficiently.
Also applies to: 223-223
pyproject.toml (1)
32-32: LGTM! Good version constraint. The addition of `pandas-gbq` with a `>=0.26.1` constraint is appropriate for the BigQuery pandas integration. The minimum version is recent enough to ensure good feature support while allowing for future updates.
```python
# Return chunks as iterator if chunksize is provided
if chunksize is not None:
    return (result[i : i + chunksize] for i in range(0, len(result), chunksize))
```
🛠️ Refactor suggestion
Revisit chunking performance.
For very large tables, returning chunked slices of the DataFrame might still be memory-intensive, as the entire DataFrame is loaded first. Would you consider a chunked read directly from `pandas_gbq` instead, if available? Wdyt?
pandas_gbq doesn't support it :(
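For reference, if truly streaming reads ever become necessary, the lower-level google-cloud-bigquery client can page results into per-chunk DataFrames. This is only an illustrative alternative (the project name, dataset/table, and key path below are placeholders), not something this PR adopts:

```python
from google.cloud import bigquery
from google.oauth2.service_account import Credentials

credentials = Credentials.from_service_account_file("path/to/key.json")  # placeholder path
client = bigquery.Client(project="my-project", credentials=credentials)  # hypothetical project

# list_rows() pages through the table server-side; to_dataframe_iterable() yields one
# DataFrame per page instead of materializing the whole table in memory.
rows = client.list_rows("my-project.my_dataset.my_table", page_size=10_000)
for chunk_df in rows.to_dataframe_iterable():
    print(f"fetched a chunk with {len(chunk_df)} rows")  # stand-in for real per-chunk handling
```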
Actionable comments posted: 1
🧹 Nitpick comments (2)
airbyte/caches/bigquery.py (2)
61-63: Consider handling unused kwargs differently. Instead of popping unused kwargs, would it be clearer to explicitly define the expected kwargs and pass only those to pandas_gbq.read_gbq? This could make the API more explicit and prevent silent parameter drops. Wdyt?
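One way that could look, as a sketch only: the allow-list contents below are illustrative and would need to match whichever `read_gbq` parameters the cache actually wants to expose.

```python
# Hypothetical allow-list of keyword arguments the cache is willing to forward.
_SUPPORTED_READ_KWARGS = {"dtypes", "max_results", "use_bqstorage_api"}


def filter_read_kwargs(kwargs: dict) -> dict:
    """Keep only explicitly supported kwargs; reject anything else loudly."""
    unexpected = set(kwargs) - _SUPPORTED_READ_KWARGS
    if unexpected:
        raise TypeError(f"Unsupported arguments for BigQuery reads: {sorted(unexpected)}")
    return kwargs
```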
55-60: Add type hints for kwargs. Would it be helpful to add type hints for the expected kwargs? This could make it clearer what parameters are supported. Maybe something like:

```diff
-    def _read_to_pandas_dataframe(
-        self,
-        table_name: str,
-        chunksize: int | None = None,
-        **kwargs,
-    ) -> pd.DataFrame | Iterator[pd.DataFrame]:
+    def _read_to_pandas_dataframe(
+        self,
+        table_name: str,
+        chunksize: int | None = None,
+        **kwargs: Any,  # or more specific types based on pandas_gbq.read_gbq parameters
+    ) -> pd.DataFrame | Iterator[pd.DataFrame]:
```

Wdyt?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- airbyte/caches/base.py (5 hunks)
- airbyte/caches/bigquery.py (2 hunks)
- pyproject.toml (2 hunks)
- tests/integration_tests/cloud/test_cloud_sql_reads.py (2 hunks)
- tests/integration_tests/test_all_cache_types.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- tests/integration_tests/cloud/test_cloud_sql_reads.py
- tests/integration_tests/test_all_cache_types.py
- airbyte/caches/base.py
- pyproject.toml
🔇 Additional comments (2)
airbyte/caches/bigquery.py (2)
78-81: Consider memory-efficient chunking implementation. I noticed this was mentioned in a previous review. The current implementation still loads the entire DataFrame into memory before chunking. Would you consider using pandas_gbq's native chunking capabilities, if available? This could help with memory usage for large tables. Wdyt?
22-23: Verify pandas_gbq version compatibility. The pandas_gbq integration looks good! However, should we specify a minimum version requirement for pandas_gbq to ensure compatibility? This could help prevent potential issues with older versions. Wdyt?
```python
# Read the table using pandas_gbq
credentials = Credentials.from_service_account_file(self.credentials_path)
result = pandas_gbq.read_gbq(
    f"{self.project_name}.{self.dataset_name}.{table_name}",
    project_id=self.project_name,
    credentials=credentials,
    **kwargs,
```
🛠️ Refactor suggestion
Add error handling for credentials loading.
The credentials loading could fail for various reasons (file not found, invalid credentials, etc.). Should we add some error handling here? Maybe something like:
```diff
-credentials = Credentials.from_service_account_file(self.credentials_path)
+try:
+    credentials = Credentials.from_service_account_file(self.credentials_path)
+except FileNotFoundError as e:
+    raise ValueError(f"Credentials file not found at {self.credentials_path}") from e
+except Exception as e:
+    raise ValueError(f"Failed to load credentials: {str(e)}") from e
```
Wdyt?
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
# Read the table using pandas_gbq
try:
    credentials = Credentials.from_service_account_file(self.credentials_path)
except FileNotFoundError as e:
    raise ValueError(f"Credentials file not found at {self.credentials_path}") from e
except Exception as e:
    raise ValueError(f"Failed to load credentials: {str(e)}") from e
result = pandas_gbq.read_gbq(
    f"{self.project_name}.{self.dataset_name}.{table_name}",
    project_id=self.project_name,
    credentials=credentials,
    **kwargs,
```
Uses the pandas_gbq library to support fetching BigQuery tables to pandas DataFrames.
Summary by CodeRabbit
New Features
Dependencies
- Added the `pandas-gbq` package to project dependencies.
Improvements
Bug Fixes