diff --git a/source/batch-mode/batch-read.txt b/source/batch-mode/batch-read.txt
index 0d777de..0e0dae9 100644
--- a/source/batch-mode/batch-read.txt
+++ b/source/batch-mode/batch-read.txt
@@ -135,6 +135,11 @@ Filters
 
    tabs:
 
+     - id: java-sync
+       content: |
+
+         .. include:: /java/filters.rst
+
      - id: python
        content: |
 
diff --git a/source/java/filters.rst b/source/java/filters.rst
new file mode 100644
index 0000000..d0a203d
--- /dev/null
+++ b/source/java/filters.rst
@@ -0,0 +1,35 @@
+.. include:: /includes/pushed-filters.rst
+
+You can use :driver:`Java Aggregation Expressions
+` to filter
+your data.
+
+.. include:: /includes/example-load-dataframe.rst
+
+First, create a DataFrame to connect to your default MongoDB data source:
+
+.. code-block:: java
+
+   Dataset<Row> df = spark.read()
+       .format("mongodb")
+       .option("database", "food")
+       .option("collection", "fruit")
+       .load();
+
+The following example retrieves only records in which the value of the ``qty``
+field is greater than or equal to ``10``:
+
+.. code-block:: java
+
+   df.filter(df.col("qty").gte(10)).show();
+
+The operation outputs the following:
+
+.. code-block:: none
+
+   +---+----+------+
+   |_id| qty|  type|
+   +---+----+------+
+   |2.0|10.0|orange|
+   |3.0|15.0|banana|
+   +---+----+------+
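
Beyond the snippets in the patch, here is a minimal, self-contained sketch of how the new Java example might run end to end. The ``SparkSession`` setup, app name, connection URI, and class name are illustrative assumptions, not part of the docs change; only the read options and the filter mirror the patch above.

.. code-block:: java

   import org.apache.spark.sql.Dataset;
   import org.apache.spark.sql.Row;
   import org.apache.spark.sql.SparkSession;

   public class FilterExample {
       public static void main(String[] args) {
           // Hypothetical session setup; the app name and connection URI are
           // assumptions for illustration only.
           SparkSession spark = SparkSession.builder()
               .appName("batch-read-filter")
               .config("spark.mongodb.read.connection.uri", "mongodb://localhost:27017")
               .getOrCreate();

           // Same read as in the patch: load the "fruit" collection from the
           // "food" database into a DataFrame.
           Dataset<Row> df = spark.read()
               .format("mongodb")
               .option("database", "food")
               .option("collection", "fruit")
               .load();

           // Apply the filter from the patch and print the resulting table;
           // the connector pushes the predicate down to MongoDB where possible.
           df.filter(df.col("qty").gte(10)).show();

           spark.stop();
       }
   }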