[SPARK-42984][CONNECT][PYTHON][TESTS] Enable test_createDataFrame_with_single_data_type

### What changes were proposed in this pull request?

Enables `ArrowParityTests.test_createDataFrame_with_single_data_type`.

### Why are the changes needed?

The test is already fixed by previous commits.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Enabled/updated the related tests.

Closes apache#40828 from ueshin/issues/SPARK-42984/test.

Authored-by: Takuya UESHIN <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
ueshin authored and zhengruifeng committed Apr 18, 2023
1 parent 40872e9 commit 61e8c5b
Showing 2 changed files with 4 additions and 4 deletions.
2 changes: 0 additions & 2 deletions python/pyspark/sql/tests/connect/test_parity_arrow.py
@@ -43,8 +43,6 @@ def test_createDataFrame_with_map_type(self):
     def test_createDataFrame_with_ndarray(self):
         self.check_createDataFrame_with_ndarray(True)
 
-    # TODO(SPARK-42984): ValueError not raised
-    @unittest.skip("Fails in Spark Connect, should enable.")
     def test_createDataFrame_with_single_data_type(self):
         self.check_createDataFrame_with_single_data_type()
6 changes: 4 additions & 2 deletions python/pyspark/sql/tests/test_arrow.py
@@ -533,8 +533,10 @@ def test_createDataFrame_with_single_data_type(self):
         self.check_createDataFrame_with_single_data_type()
 
     def check_createDataFrame_with_single_data_type(self):
-        with self.assertRaisesRegex(ValueError, ".*IntegerType.*not supported.*"):
-            self.spark.createDataFrame(pd.DataFrame({"a": [1]}), schema="int").collect()
+        for schema in ["int", IntegerType()]:
+            with self.subTest(schema=schema):
+                with self.assertRaisesRegex(ValueError, ".*IntegerType.*not supported.*"):
+                    self.spark.createDataFrame(pd.DataFrame({"a": [1]}), schema=schema).collect()
 
     def test_createDataFrame_does_not_modify_input(self):
         # Some series get converted for Spark to consume, this makes sure input is unchanged
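The updated test wraps each schema variant in `self.subTest(...)` so that a failure for one variant is reported with its label and does not stop the other variants from running, while `assertRaisesRegex` checks both that a `ValueError` is raised and that its message mentions the unsupported type. A minimal, self-contained sketch of that pattern follows; `SingleDataTypeSchemaTest` and `create_dataframe` are hypothetical stand-ins for the real Spark session and `spark.createDataFrame`, not code from this PR.

```python
import unittest


class SingleDataTypeSchemaTest(unittest.TestCase):
    """Sketch of the subTest + assertRaisesRegex pattern used in the diff."""

    def create_dataframe(self, schema):
        # Hypothetical stand-in for spark.createDataFrame: always raises the
        # kind of error the real call raises for a single-data-type schema.
        raise ValueError(f"IntegerType is not supported (schema={schema!r})")

    def test_rejects_single_data_type(self):
        # subTest labels each case and keeps iterating after a failure, so
        # both the string and the object form of the schema are exercised.
        for schema in ["int", "IntegerType()"]:
            with self.subTest(schema=schema):
                with self.assertRaisesRegex(ValueError, ".*IntegerType.*not supported.*"):
                    self.create_dataframe(schema)
```

Running this with `python -m unittest` reports the two schema variants as separately labeled subtests, which is why the PR can cover both `"int"` and `IntegerType()` in one test method.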
