
Commit 028ddc1

[Spark] Add Filters section for Java (#226)
* java filters tab
1 parent ca7f722 commit 028ddc1

File tree

2 files changed: +40 −0

source/batch-mode/batch-read.txt

Lines changed: 5 additions & 0 deletions

@@ -135,6 +135,11 @@ Filters

   tabs:

+     - id: java-sync
+       content: |
+
+          .. include:: /java/filters.rst
+
      - id: python
        content: |
source/java/filters.rst (new file)

Lines changed: 35 additions & 0 deletions

.. include:: /includes/pushed-filters.rst

You can use :driver:`Java Aggregation Expressions
</java/sync/upcoming/fundamentals/aggregation-expression-operations/>` to filter
your data.

.. include:: /includes/example-load-dataframe.rst

First, create a DataFrame to connect to your default MongoDB data source:

.. code-block:: java

   Dataset<Row> df = spark.read()
       .format("mongodb")
       .option("database", "food")
       .option("collection", "fruit")
       .load();

The following example retrieves only records in which the value of the ``qty``
field is greater than or equal to ``10``:

.. code-block:: java

   df.filter(df.col("qty").gte(10))

The operation outputs the following:

.. code-block:: none

   +---+----+------+
   |_id| qty|  type|
   +---+----+------+
   |2.0|10.0|orange|
   |3.0|15.0|banana|
   +---+----+------+
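The filter in the new page pushes a simple numeric predicate (``qty >= 10``) down to the data source. As a point of comparison, the same predicate can be sketched in plain Java without Spark; the ``Fruit`` record, sample rows, and class name below are hypothetical, invented only to mirror the ``food.fruit`` sample data:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FilterSketch {
    // Hypothetical row type mirroring the _id / qty / type columns.
    record Fruit(double id, double qty, String type) {}

    public static void main(String[] args) {
        // Sample rows chosen to match the table in the docs output.
        List<Fruit> rows = List.of(
            new Fruit(1.0, 5.0, "apple"),
            new Fruit(2.0, 10.0, "orange"),
            new Fruit(3.0, 15.0, "banana"));

        // Plain-Java equivalent of df.filter(df.col("qty").gte(10)):
        // keep only rows whose qty is greater than or equal to 10.
        List<Fruit> filtered = rows.stream()
            .filter(f -> f.qty() >= 10)
            .collect(Collectors.toList());

        filtered.forEach(f -> System.out.println(f.type()));
        // prints: orange, banana (one per line)
    }
}
```

The Spark version differs in one important way: the predicate is not evaluated in the JVM over in-memory rows but is pushed down to MongoDB, so only matching documents cross the network.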
